In the article below, we will (i) automatically find the Option (of choice) closest to At The Money (ATM) and (ii) calculate its Implied Volatility and Greeks. We focus below on Future (Monthly) Options on the Index .STOXX50E (EURO STOXX 50 EUR PRICE INDEX) ('EUREX') and .SPX (S&P 500 INDEX), although you can apply the same logic to other indices. To find the ATM instrument, we simply and efficiently use the Search API. Usually, calculating the Black-Scholes-Merton model's Implied Volatility involves numerical techniques, since the model cannot be inverted in closed form (unless restrictive assumptions are made, e.g. that log returns follow a standard normal distribution with mean zero, $\mu = 0$, and standard deviation one, $\sigma = 1$). If we ran these techniques for each Implied Volatility value on our own computer, each data point could take several seconds - if not minutes - to compute. I have chosen instead to use the Instrument Pricing Analytics (IPA) service in the Refinitiv Data Platform API Family, as this service allows me to send model specifications (and variables) and receive several (up to 100) computed Implied Volatility values in one go - in a few seconds. Not only does this save a great deal of time, but also many lines of code!
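To make the point above concrete, here is a minimal sketch of the kind of numerical root-finding the IPA service spares us: since the Black-Scholes-Merton call price increases monotonically in volatility, we can bisect on $\sigma$ until the model price matches the observed market price. The function names and parameter values below are purely illustrative, not part of any Refinitiv library:

```python
import math

def bs_call_price(s, k, t, r, sigma):
    """Black-Scholes-Merton price of a European call (no dividends).
    s: spot, k: strike, t: time to expiry in years, r: risk-free rate."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return s * cdf(d1) - k * math.exp(-r * t) * cdf(d2)

def implied_vol_bisection(price, s, k, t, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Invert the pricing formula numerically: bisect on sigma until the
    model price matches the observed market price."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call_price(s, k, t, r, mid) > price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Running this once is fast, but repeating it for thousands of data points (and for the Greeks too) adds up, which is why batching up to 100 computations per IPA request is attractive.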
import refinitiv.data as rd # This is LSEG's Data and Analytics' API wrapper, called the Refinitiv Data Library for Python.
from refinitiv.data.content import historical_pricing # We will use this Python Class in `rd` to show the Implied Volatility data already available before our work.
from refinitiv.data.content import search # We will use this Python Class in `rd` to find the instrument we are after, closest to At The Money.
import refinitiv.data.content.ipa.financial_contracts as rdf # We're going to need this to use the content layer of the RD library and the calculators of Greeks and Implied Volatility in Instrument Pricing Analytics (IPA) and Exchange Traded Instruments (ETI)
from refinitiv.data.content.ipa.financial_contracts import option # We're going to need this to use the content layer of the RD library and the calculators of Greeks and Implied Volatility in IPA & ETI
import numpy as np # We need `numpy` for mathematical and array manipulations.
import pandas as pd # We need `pandas` for dataframe and array manipulations.
import calendar # We use `calendar` to identify holidays and maturity dates of instruments of interest.
import pytz # We use `pytz` to manipulate time values, aiding the `calendar` library. To import its types, you might need to run `!python3 -m pip install types-pytz`
import pandas_market_calendars as mcal # Used to identify holidays. See `https://github.com/rsheftel/pandas_market_calendars/blob/master/examples/usage.ipynb` for info on this market calendar library
from datetime import datetime, timedelta, timezone # We use these to manipulate time values
from dateutil.relativedelta import relativedelta # We use `relativedelta` to manipulate time values aiding `calendar` library.
import requests # We'll need this to send requests to servers via the delivery layer - more on that below
# `plotly` is a library used to render interactive graphs:
import plotly.graph_objects as go
import plotly.express as px # This is just to see the implied vol graph when that field is available
import matplotlib.pyplot as plt # We use `matplotlib` just in case users do not have an environment suited to `plotly`.
from IPython.display import clear_output, display # We use `clear_output` for users who wish to loop graph production on a regular basis. We'll use this to `display` data (e.g.: pandas data-frames).
from plotly import subplots
import plotly
# Let's authenticate ourselves to LSEG's Data and Analytics service, Refinitiv:
try:  # The following sessions are not available in Codebook, hence this try block
    rd.open_session(config_name="C:\\Example.DataLibrary.Python-main\\Example.DataLibrary.Python-main\\Configuration\\refinitiv-data.config.json")
    rd.open_session("desktop.workspace")
except:
    rd.open_session()
print(f"Here we are using the refinitiv Data Library version {rd.__version__}")
Here we are using the refinitiv Data Library version 1.0.0b20
FYI (For Your Information): We are running Python 3.8:
!python -V
Python 3.8.12
In this article, we will attempt to calculate the Implied Volatility (IV) for Future Options on 2 indices (.STOXX50E & .SPX) trading 'ATM', meaning that the contract's strike price is at (or near - within x%) parity with (i.e.: equal to) its current trading price (TRDPRC_1). We are also only looking for such Options expiring within a set time window, while allowing for the 'forever' option, i.e.: Options that expire any time after the date of calculation. To do so, we first have to find the option in question. To find live Options, the Search API is best suited. To find expired Options, we will use functions created in Haykaz's amazing articles "Finding Expired Options and Backtesting a Short Iron Condor Strategy" & "Functions to find Option RICs traded on different exchanges".
Live Options, in this context, are Options that have not yet expired at the time of computation.
As aforementioned, the Search API is best for finding live Options. Here we look for options on .STOXX50E that mature on the 3rd Friday of July 2023, 2023-07-21:
response1 = search.Definition(
view = search.Views.SEARCH_ALL, # To see what views are available: `help(search.Views)` & `search.metadata.Definition(view = search.Views.SEARCH_ALL).get_data().data.df.to_excel("SEARCH_ALL.xlsx")`
query=".STOXX50E",
select="DocumentTitle, RIC, StrikePrice, ExchangeCode, ExpiryDate, UnderlyingAsset, " +
           "UnderlyingAssetName, UnderlyingAssetRIC, ESMAUnderlyingIndexCode, RCSUnderlyingMarket, " +
"UnderlyingQuoteName, UnderlyingQuoteRIC",
filter="RCSAssetCategoryLeaf eq 'Option' and RIC eq 'STX*' and DocumentTitle ne '*Weekly*' " +
"and CallPutOption eq 'Call' and ExchangeCode eq 'EUX' and " +
"ExpiryDate ge 2022-07-10 and ExpiryDate lt 2023-07-22", # ge (greater than or equal to), gt (greater than), lt (less than) and le (less than or equal to). These can only be applied to numeric and date properties.
top=100,
).get_data()
searchDf1 = response1.data.df
searchDf1
| | DocumentTitle | RIC | StrikePrice | ExchangeCode | ExpiryDate | UnderlyingQuoteRIC |
|---|---|---|---|---|---|---|
| 0 | Eurex Dow Jones EURO STOXX 50 Index Option 430... | STXE43000C3.EX | 4300 | EUX | 2023-03-17 | [.STOXX50E] |
| 1 | Eurex Dow Jones EURO STOXX 50 Index Option 425... | STXE42500C3.EX | 4250 | EUX | 2023-03-17 | [.STOXX50E] |
| 2 | Eurex Dow Jones EURO STOXX 50 Index Option 420... | STXE42000C3.EX | 4200 | EUX | 2023-03-17 | [.STOXX50E] |
| 3 | Eurex Dow Jones EURO STOXX 50 Index Option 422... | STXE42250C3.EX | 4225 | EUX | 2023-03-17 | [.STOXX50E] |
| 4 | Eurex Dow Jones EURO STOXX 50 Index Option 427... | STXE42750C3.EX | 4275 | EUX | 2023-03-17 | [.STOXX50E] |
| ... | ... | ... | ... | ... | ... | ... |
| 95 | Eurex Dow Jones EURO STOXX 50 Index Option 362... | STXE36250C3.EX | 3625 | EUX | 2023-03-17 | [.STOXX50E] |
| 96 | Eurex Dow Jones EURO STOXX 50 Index Option 405... | STXE40500E3.EX | 4050 | EUX | 2023-05-19 | [.STOXX50E] |
| 97 | Eurex Dow Jones EURO STOXX 50 Index Option 447... | STXE44750D3.EX | 4475 | EUX | 2023-04-21 | [.STOXX50E] |
| 98 | Eurex Dow Jones EURO STOXX 50 Index Option 417... | STXE41750E3.EX | 4175 | EUX | 2023-05-19 | [.STOXX50E] |
| 99 | Eurex Dow Jones EURO STOXX 50 Index Option 405... | STXE40500F3.EX | 4050 | EUX | 2023-06-16 | [.STOXX50E] |
100 rows × 6 columns
Now we can fetch the current underlying price and pick the option with the strike price closest to it, i.e.: the most 'At The Money'; note that this means the option can be in or out of the money, as long as it is the closest to at the money:
currentUnderlyingPrc = rd.get_history(
universe=[searchDf1.UnderlyingQuoteRIC[0][0]],
fields=["TRDPRC_1"],
interval="tick").iloc[-1][0]
currentUnderlyingPrc
4281.93
searchDf1.iloc[(searchDf1['StrikePrice']-currentUnderlyingPrc).abs().argsort()[:1]]
| | DocumentTitle | RIC | StrikePrice | ExchangeCode | ExpiryDate | UnderlyingQuoteRIC |
|---|---|---|---|---|---|---|
| 79 | Eurex Dow Jones EURO STOXX 50 Index Option 427... | STXE42750E3.EX | 4275 | EUX | 2023-05-19 | [.STOXX50E] |
To illustrate with figures from an earlier run: for the Call Option 'STXE33500G3.EX', the strike price was 3350, higher than the underlying's spot price of 3331.7. The holder of that option has the right (but not the obligation) to buy the underlying for 3350 EUR which, were the price of the underlying to stay the same until expiry (3331.7 EUR on 2023-07-21), would mean a loss of (3350 - 3331.7 =) 18.3 EUR. That option was therefore 'Out-The-Money'.
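To make the moneyness arithmetic above concrete, here is a tiny, purely illustrative helper (not part of the RD library) that classifies a call option and returns its intrinsic value:

```python
def call_moneyness(spot, strike):
    """Classify a call option's moneyness and return its intrinsic value.
    Illustrative helper: a call is In-The-Money when spot > strike,
    Out-The-Money when spot < strike, and its intrinsic value is
    max(spot - strike, 0)."""
    intrinsic = max(spot - strike, 0.0)
    if spot > strike:
        label = "In-The-Money"
    elif spot < strike:
        label = "Out-The-Money"
    else:
        label = "At-The-Money"
    return label, intrinsic

# The figures quoted above: spot 3331.7, strike 3350
label, intrinsic = call_moneyness(3331.7, 3350.0)
```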
N.B.: When using the Filter in Search and playing with dates, it is good to read the API Playground Documentation; it mentions that: "Dates are written in ISO datetime format. The time portion is optional, as is the timezone (assumed to be UTC unless otherwise specified). Valid examples include 2012-03-11T17:13:55Z, 2012-03-11T17:13:55, 2012-03-11T12:00-03:30, 2012-03-11."
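For example, with the standard library's `datetime` you can build filter-friendly strings in exactly these ISO formats rather than hand-typing them (the variable names below are just local illustrations):

```python
from datetime import datetime, timezone

# Date-only form, as accepted by the Search filter:
expiry_from = datetime(2022, 7, 10).strftime('%Y-%m-%d')
# Full ISO timestamp with an explicit UTC offset:
expiry_to = datetime(2023, 7, 22, 17, 13, 55, tzinfo=timezone.utc).isoformat()
# Assemble the clause exactly as the filter syntax expects:
filter_clause = f"ExpiryDate ge {expiry_from} and ExpiryDate lt {expiry_to}"
```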
Most of the time, market agents will be interested in the next expiring Option, unless we are too close to it. We would not be interested, for example, in an option expiring in 1 hour, or even tomorrow, because that is so close (in time) that the information reflected in the Option's trades in the market does not represent future expectations of its underlying, but current expectations of it.
To implement such logic, we need to know the expiry dates of the options that we are interested in. We are looking for a Python function narrowing our search to options expiring on the 3rd Friday of any one month. For info on this function, please read the articles "Finding Expired Options and Backtesting a Short Iron Condor Strategy" & "Functions to find Option RICs traded on different exchanges".
def Get_exp_dates(year, days=True, mcal_get_calendar='EUREX'):
'''
Get_exp_dates Version 2.0:
This function gets expiration dates for a year for index options, which are the 3rd Fridays of each month.
Changes
----------------------------------------------
Changed from Version 1.0 to 2.0: Jonathan Legrand changed Haykaz Aramyan's original code to allow
(i) for the function's holiday argument to be changed, and defaulted to 'EUREX' as opposed to 'CBOE_Index_Options' and
(ii) for the function to output full date objects as opposed to just days of the month if argument days=True.
Dependencies
----------------------------------------------
Python library 'pandas_market_calendars' version 3.2
Parameters
-----------------------------------------------
Input:
year(int): year for which expiration days are requested
mcal_get_calendar(str): String of the calendar for which holidays have to be taken into account. More on this calendar (link to Github checked 2022-10-11): https://github.com/rsheftel/pandas_market_calendars/blob/177e7922c7df5ad249b0d066b5c9e730a3ee8596/pandas_market_calendars/exchange_calendar_cboe.py
Default: mcal_get_calendar='EUREX'
days(bool): If True, only the days of the month are output, else datetime.date objects are.
Default: days=True
Output:
dates(dict): dictionary of expiration days for each month of a specified year in datetime.date format.
'''
# get market holidays (EUREX by default)
EUREXCal = mcal.get_calendar(mcal_get_calendar)
holidays = EUREXCal.holidays().holidays
# set calendar starting from Saturday
c = calendar.Calendar(firstweekday=calendar.SATURDAY)
# get the 3rd Friday of each month
exp_dates = {}
for i in range(1, 13):
monthcal = c.monthdatescalendar(year, i)
date = monthcal[2][-1]
# check if the found date is a holiday and take the previous date if it is
if date in holidays:
date = date + timedelta(-1)
# append the date to the dictionary
if year in exp_dates:
### Changed from original code from here on by Jonathan Legrand on 2022-10-11
if days: exp_dates[year].append(date.day)
else: exp_dates[year].append(date)
else:
if days: exp_dates[year] = [date.day]
else: exp_dates[year] = [date]
return exp_dates
fullDates = Get_exp_dates(2022, days=False)
dates = Get_exp_dates(2022)
fullDatesStrDict = {i: [fullDates[i][j].strftime('%Y-%m-%d')
for j in range(len(fullDates[i]))]
for i in list(fullDates.keys())}
fullDatesDayDict = {i: [fullDates[i][j].day
for j in range(len(fullDates[i]))]
for i in list(fullDates.keys())}
print(fullDates)
{2022: [datetime.date(2022, 1, 21), datetime.date(2022, 2, 18), datetime.date(2022, 3, 18), datetime.date(2022, 4, 14), datetime.date(2022, 5, 20), datetime.date(2022, 6, 17), datetime.date(2022, 7, 15), datetime.date(2022, 8, 19), datetime.date(2022, 9, 16), datetime.date(2022, 10, 21), datetime.date(2022, 11, 18), datetime.date(2022, 12, 16)]}
print(fullDatesStrDict)
{2022: ['2022-01-21', '2022-02-18', '2022-03-18', '2022-04-14', '2022-05-20', '2022-06-17', '2022-07-15', '2022-08-19', '2022-09-16', '2022-10-21', '2022-11-18', '2022-12-16']}
print(dates)
{2022: [21, 18, 18, 14, 20, 17, 15, 19, 16, 21, 18, 16]}
print(fullDatesDayDict)
{2022: [21, 18, 18, 14, 20, 17, 15, 19, 16, 21, 18, 16]}
E.g.: I would like to know the next Future (Monthly) Option (i) on the Index '.STOXX50E', (ii) closest to ATM (i.e.: with an underlying spot price closest to the option's strike price), and (iii) expiring in more than x days (i.e.: not too close to calculation time 't'), let's say 15 days:
x = 15
timeOfCalcDatetime = datetime.now() # For now, we will focus on the use-case where we are calculating values for today; later we will allow for any day going back a few business days.
timeOfCalcStr = datetime.now().strftime('%Y-%m-%d')
timeOfCalcStr
'2023-03-08'
fullDatesAtTimeOfCalc = Get_exp_dates(timeOfCalcDatetime.year, days=False) # `timeOfCalcDatetime.year` here is 2023
fullDatesAtTimeOfCalcDatetime = [
datetime(i.year, i.month, i.day)
for i in fullDatesAtTimeOfCalc[list(fullDatesAtTimeOfCalc.keys())[0]]]
print(fullDatesAtTimeOfCalcDatetime)
[datetime.datetime(2023, 1, 20, 0, 0), datetime.datetime(2023, 2, 17, 0, 0), datetime.datetime(2023, 3, 17, 0, 0), datetime.datetime(2023, 4, 21, 0, 0), datetime.datetime(2023, 5, 19, 0, 0), datetime.datetime(2023, 6, 16, 0, 0), datetime.datetime(2023, 7, 21, 0, 0), datetime.datetime(2023, 8, 18, 0, 0), datetime.datetime(2023, 9, 15, 0, 0), datetime.datetime(2023, 10, 20, 0, 0), datetime.datetime(2023, 11, 17, 0, 0), datetime.datetime(2023, 12, 15, 0, 0)]
expiryDateOfInt = [i for i in fullDatesAtTimeOfCalcDatetime
if i > timeOfCalcDatetime + relativedelta(days=x)][0]
expiryDateOfInt
datetime.datetime(2023, 4, 21, 0, 0)
Now we can look for the one option we're after:
response2 = search.Definition(
view=search.Views.SEARCH_ALL, # To see what views are available: `help(search.Views)` & `search.metadata.Definition(view = search.Views.SEARCH_ALL).get_data().data.df.to_excel("SEARCH_ALL.xlsx")`
query=".STOXX50E",
select="DocumentTitle, RIC, StrikePrice, ExchangeCode, ExpiryDate, UnderlyingAsset, " +
           "UnderlyingAssetName, UnderlyingAssetRIC, ESMAUnderlyingIndexCode, RCSUnderlyingMarket, " +
"UnderlyingQuoteName, UnderlyingQuoteRIC",
filter="RCSAssetCategoryLeaf eq 'Option' and RIC eq 'STX*' and DocumentTitle ne '*Weekly*' " +
"and CallPutOption eq 'Call' and ExchangeCode eq 'EUX' and " +
f"ExpiryDate ge {(expiryDateOfInt - relativedelta(days=1)).strftime('%Y-%m-%d')} " +
f"and ExpiryDate lt {(expiryDateOfInt + relativedelta(days=1)).strftime('%Y-%m-%d')}", # ge (greater than or equal to), gt (greater than), lt (less than) and le (less than or equal to). These can only be applied to numeric and date properties.
top=10000,
).get_data()
searchDf2 = response2.data.df
searchDf2
| | DocumentTitle | RIC | StrikePrice | ExchangeCode | ExpiryDate | UnderlyingQuoteRIC |
|---|---|---|---|---|---|---|
| 0 | Eurex Dow Jones EURO STOXX 50 Index Option 425... | STXE42500D3.EX | 4250 | EUX | 2023-04-21 | [.STOXX50E] |
| 1 | Eurex Dow Jones EURO STOXX 50 Index Option 420... | STXE42000D3.EX | 4200 | EUX | 2023-04-21 | [.STOXX50E] |
| 2 | Eurex Dow Jones EURO STOXX 50 Index Option 430... | STXE43000D3.EX | 4300 | EUX | 2023-04-21 | [.STOXX50E] |
| 3 | Eurex Dow Jones EURO STOXX 50 Index Option 410... | STXE41000D3.EX | 4100 | EUX | 2023-04-21 | [.STOXX50E] |
| 4 | Eurex Dow Jones EURO STOXX 50 Index Option 415... | STXE41500D3.EX | 4150 | EUX | 2023-04-21 | [.STOXX50E] |
| ... | ... | ... | ... | ... | ... | ... |
| 150 | Eurex Dow Jones EURO STOXX 50 Index Option 570... | STXE57000D3.EX | 5700 | EUX | 2023-04-21 | [.STOXX50E] |
| 151 | Eurex Dow Jones EURO STOXX 50 Index Option 100... | STXE100000D3.EX | 10000 | EUX | 2023-04-21 | [.STOXX50E] |
| 152 | Eurex Dow Jones EURO STOXX 50 Index Option 900... | STXE90000D3.EX | 9000 | EUX | 2023-04-21 | [.STOXX50E] |
| 153 | Eurex Dow Jones EURO STOXX 50 Index Option 527... | STXE52750D3.EX | 5275 | EUX | 2023-04-21 | [.STOXX50E] |
| 154 | Eurex Dow Jones EURO STOXX 50 Index Option 532... | STXE53250D3.EX | 5325 | EUX | 2023-04-21 | [.STOXX50E] |
155 rows × 6 columns
And again, we can collect the closest to ATM:
searchDf2.iloc[(searchDf2['StrikePrice']-currentUnderlyingPrc).abs().argsort()[:1]]
| | DocumentTitle | RIC | StrikePrice | ExchangeCode | ExpiryDate | UnderlyingQuoteRIC |
|---|---|---|---|---|---|---|
| 10 | Eurex Dow Jones EURO STOXX 50 Index Option 427... | STXE42750D3.EX | 4275 | EUX | 2023-04-21 | [.STOXX50E] |
Now we have our instrument:
instrument = searchDf2.iloc[(searchDf2['StrikePrice']-currentUnderlyingPrc).abs().argsort()[:1]].RIC.values[0]
instrument
'STXE42750D3.EX'
Refinitiv provides pre-calculated Implied Volatility values, but they are daily, and we will look into calculating them at higher frequencies:
## Example Options:
# instrument_1 = 'SPXv212240000.U'
# instrument_2 = 'STXE35500J2.EX' # Eurex Dow Jones EURO STOXX 50 Index Option 3550 Call Oct 2022, Stock Index Cash Option, Underlying RIC: .STOXX50E
# instrument_3 = 'SPXj212240000.U'
datetime.now().isoformat(timespec='minutes')
'2023-03-08T11:53'
start = (timeOfCalcDatetime - pd.tseries.offsets.BDay(5)).strftime('%Y-%m-%dT%H:%M:%S.%f') # '2022-10-05T07:30:00.000'
endDateTime = datetime.now()
end = endDateTime.strftime('%Y-%m-%dT%H:%M:%S.%f') # e.g.: '2022-09-09T20:00:00.000'
end
'2023-03-08T11:53:10.166926'
_RefDailyImpVolDf = historical_pricing.events.Definition(
instrument, fields=['IMP_VOLT'], count=2000).get_data()
_RefDailyImpVolDf.data.df.head()
| STXE42750D3.EX | IMP_VOLT |
|---|---|
| Timestamp | |
| 2022-12-07 00:54:54.453 | 15.4658 |
| 2022-12-08 00:54:40.323 | 15.6759 |
| 2022-12-10 00:54:49.609 | 15.3218 |
| 2022-12-22 00:53:27.211 | 15.0791 |
| 2022-12-23 00:53:27.595 | 15.0012 |
try: RefDailyImpVolDf = _RefDailyImpVolDf.data.df.drop(['EVENT_TYPE'], axis=1) # In codebook, this line is needed
except: RefDailyImpVolDf = _RefDailyImpVolDf.data.df # If outside of codebook
fig = px.line(RefDailyImpVolDf, title = RefDailyImpVolDf.columns.name + " " + RefDailyImpVolDf.columns[0]) # This is just to see the implied vol graph when that field is available
fig.show()
# rd.get_history(
# universe=["STXE35500J2.EX"],
# fields=["TRDPRC_1"],
# interval="tick")
_optnMrktPrice = rd.get_history(
universe=[instrument],
fields=["TRDPRC_1"],
interval="10min",
start=start, # Ought to always start at 4 am for OPRA exchanged Options, more info in the article below
end=end) # Ought to always end at 8 pm for OPRA exchanged Options, more info in the article below
As you can see, there isn't necessarily a trade every 10 min.:
_optnMrktPrice.head()
| STXE42750D3.EX | TRDPRC_1 |
|---|---|
| Timestamp | |
| 2023-03-02 08:20:00 | 63.5 |
| 2023-03-02 13:00:00 | 75.6 |
| 2023-03-02 13:10:00 | 75.0 |
| 2023-03-02 13:50:00 | 72.5 |
| 2023-03-02 14:40:00 | 79.2 |
However, for the statistical inferences that we will make further in the article, when we calculate Implied Volatilities and therefore implement the Black-Scholes model, we will need 'continuous' timeseries to work with. There are several ways to go from a discrete timeseries (like ours, even if we go down to tick data) to a continuous one, but for this article we will first focus on making 'buckets' of 10 min. If no trade is made in any 10 min. bucket, we will assume the price stayed the same as before, throughout the exchange's trading hours.
Thankfully this is simple. Let's stick with EUREX for now:
optnMrktPrice = _optnMrktPrice.resample('10Min').mean() # get a datapoint every 10 min
optnMrktPrice = optnMrktPrice[optnMrktPrice.index.strftime('%Y-%m-%d').isin([i for i in _optnMrktPrice.index.strftime('%Y-%m-%d').unique()])] # Only keep trading days
optnMrktPrice = optnMrktPrice.loc[(optnMrktPrice.index.strftime('%H:%M:%S') >= '07:30:00') & (optnMrktPrice.index.strftime('%H:%M:%S') <= '22:00:00')] # Only keep trading hours
optnMrktPrice.fillna(method='ffill', inplace=True) # Forward Fill to populate NaN values
print(f"Our dataframe started at {str(optnMrktPrice.index[0])} and went on continuously till {str(optnMrktPrice.index[-1])}, so out of trading hours rows are removed")
optnMrktPrice
Our dataframe started at 2023-03-02 08:20:00 and went on continuously till 2023-03-07 16:00:00, so out of trading hours rows are removed
| STXE42750D3.EX | TRDPRC_1 |
|---|---|
| Timestamp | |
| 2023-03-02 08:20:00 | 63.5 |
| 2023-03-02 08:30:00 | 63.5 |
| 2023-03-02 08:40:00 | 63.5 |
| 2023-03-02 08:50:00 | 63.5 |
| 2023-03-02 09:00:00 | 63.5 |
| ... | ... |
| 2023-03-07 15:20:00 | 116.4 |
| 2023-03-07 15:30:00 | 116.4 |
| 2023-03-07 15:40:00 | 116.4 |
| 2023-03-07 15:50:00 | 116.4 |
| 2023-03-07 16:00:00 | 95.9 |
311 rows × 1 columns
Note that the option might not have traded in the past 10 min. This can cause issues in the code below; we thus ought to add a row for the current time:
# optnMrktPrice = optnMrktPrice.append(
# pd.DataFrame(
# [[pd.NA]], columns=optnMrktPrice.columns,
# index=[(endDateTime + (datetime.min - endDateTime) % timedelta(minutes=10))]))
# optnMrktPrice
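The commented-out snippet above relies on `DataFrame.append`, which was deprecated and then removed in pandas 2.0; `pd.concat` achieves the same result. A sketch of an equivalent, under the same assumption of 10-minute buckets (the helper name is ours, for illustration):

```python
import pandas as pd
from datetime import datetime, timedelta

def add_current_time_row(prices: pd.DataFrame, now: datetime) -> pd.DataFrame:
    """Append an empty (NA) row stamped at the next 10-minute boundary,
    so the frame always reaches the current time."""
    # `(datetime.min - now) % timedelta(minutes=10)` is the remainder needed
    # to round `now` up to the next 10-minute boundary:
    next_bucket = now + (datetime.min - now) % timedelta(minutes=10)
    new_row = pd.DataFrame([[pd.NA]], columns=prices.columns, index=[next_bucket])
    return pd.concat([prices, new_row])
```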
Note also that one may want to only look at 'At Option Trade' datapoints, i.e.: Implied Volatility when a trade is made for the Option, but not when none is made. For this, we will use the 'At Trade' (AT) dataframes:
AToptnMrktPrice = _optnMrktPrice
AToptnMrktPrice
| STXE42750D3.EX | TRDPRC_1 |
|---|---|
| Timestamp | |
| 2023-03-02 08:20:00 | 63.5 |
| 2023-03-02 13:00:00 | 75.6 |
| 2023-03-02 13:10:00 | 75.0 |
| 2023-03-02 13:50:00 | 72.5 |
| 2023-03-02 14:40:00 | 79.2 |
| 2023-03-02 14:50:00 | 81.4 |
| 2023-03-02 15:00:00 | 82.7 |
| 2023-03-02 15:20:00 | 82.7 |
| 2023-03-02 15:50:00 | 83.2 |
| 2023-03-03 08:10:00 | 94.6 |
| 2023-03-03 10:00:00 | 99.5 |
| 2023-03-03 11:30:00 | 103.1 |
| 2023-03-03 11:50:00 | 102.7 |
| 2023-03-03 12:00:00 | 104.4 |
| 2023-03-03 12:30:00 | 104.4 |
| 2023-03-03 14:10:00 | 105.0 |
| 2023-03-03 15:50:00 | 106.8 |
| 2023-03-03 16:00:00 | 107.3 |
| 2023-03-03 16:10:00 | 110.7 |
| 2023-03-06 08:40:00 | 122.2 |
| 2023-03-06 09:30:00 | 118.0 |
| 2023-03-06 13:50:00 | 118.0 |
| 2023-03-06 16:20:00 | 116.4 |
| 2023-03-07 16:00:00 | 95.9 |
Now let's get data for the underlying, which we need to calculate IV:
underlying = searchDf2.iloc[(searchDf2['StrikePrice']-currentUnderlyingPrc).abs().argsort()[:1]].UnderlyingQuoteRIC.values[0][0]
underlying
'.STOXX50E'
If you are interested in the opening times of any one exchange, you can use the following:
hoursDf = rd.get_data(
universe=["EUREX21"],
fields=["ROW80_10"])
display(hoursDf)
hoursDf.iloc[0,1]
| | Instrument | ROW80_10 |
|---|---|---|
| 0 | EUREX21 | OGBL/OGBM/OGBS 07:30-08:00 08:0... |
' OGBL/OGBM/OGBS 07:30-08:00 08:00-19:00 19:00-20:00 '
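The `ROW80_10` field returns the hours as a free-text string, so if you want the trading windows programmatically you could parse them out; a sketch, assuming the `HH:MM-HH:MM` layout shown above (the function name is ours):

```python
import re

def parse_trading_windows(row80_10: str):
    """Extract (open, close) pairs like ('07:30', '08:00') from a ROW80_10
    trading-hours string. Illustrative parser: it simply matches every
    HH:MM-HH:MM token in the free-text field."""
    return re.findall(r'(\d{2}:\d{2})-(\d{2}:\d{2})', row80_10)

windows = parse_trading_windows(' OGBL/OGBM/OGBS 07:30-08:00 08:00-19:00 19:00-20:00 ')
```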
_underlyingMrktPrice = rd.get_history(
universe=[underlying],
fields=["TRDPRC_1"],
interval="10min",
start=start,
end=end)
_underlyingMrktPrice
| .STOXX50E | TRDPRC_1 |
|---|---|
| Timestamp | |
| 2023-03-01 12:00:00 | 4273.75 |
| 2023-03-01 12:10:00 | 4272.33 |
| 2023-03-01 12:20:00 | 4271.45 |
| 2023-03-01 12:30:00 | 4272.11 |
| 2023-03-01 12:40:00 | 4268.92 |
| ... | ... |
| 2023-03-08 10:10:00 | 4274.79 |
| 2023-03-08 10:20:00 | 4274.32 |
| 2023-03-08 10:30:00 | 4277.82 |
| 2023-03-08 10:40:00 | 4280.98 |
| 2023-03-08 10:50:00 | 4282.80 |
259 rows × 1 columns
ATunderlyingMrktPrice = AToptnMrktPrice.join(
_underlyingMrktPrice, lsuffix='_OptPr', rsuffix='_UnderlyingPr', how='inner')
ATunderlyingMrktPrice
| | TRDPRC_1_OptPr | TRDPRC_1_UnderlyingPr |
|---|---|---|
| Timestamp | | |
| 2023-03-02 08:20:00 | 63.5 | 4184.42 |
| 2023-03-02 13:00:00 | 75.6 | 4219.54 |
| 2023-03-02 13:10:00 | 75.0 | 4217.75 |
| 2023-03-02 13:50:00 | 72.5 | 4211.76 |
| 2023-03-02 14:40:00 | 79.2 | 4226.34 |
| 2023-03-02 14:50:00 | 81.4 | 4230.12 |
| 2023-03-02 15:00:00 | 82.7 | 4236.22 |
| 2023-03-02 15:20:00 | 82.7 | 4235.87 |
| 2023-03-02 15:50:00 | 83.2 | 4237.32 |
| 2023-03-03 08:10:00 | 94.6 | 4265.75 |
| 2023-03-03 10:00:00 | 99.5 | 4275.45 |
| 2023-03-03 11:30:00 | 103.1 | 4279.28 |
| 2023-03-03 11:50:00 | 102.7 | 4279.87 |
| 2023-03-03 12:00:00 | 104.4 | 4282.69 |
| 2023-03-03 12:30:00 | 104.4 | 4279.62 |
| 2023-03-03 14:10:00 | 105.0 | 4284.80 |
| 2023-03-03 15:50:00 | 106.8 | 4290.44 |
| 2023-03-03 16:00:00 | 107.3 | 4291.99 |
| 2023-03-03 16:10:00 | 110.7 | 4297.49 |
| 2023-03-06 08:40:00 | 122.2 | 4319.68 |
| 2023-03-06 09:30:00 | 118.0 | 4311.41 |
| 2023-03-06 13:50:00 | 118.0 | 4313.63 |
| 2023-03-06 16:20:00 | 116.4 | 4311.68 |
| 2023-03-07 16:00:00 | 95.9 | 4280.87 |
Let's put it all in one data-frame, df. Some datasets will have data going from the time we set as start all the way to end. Some won't, because no trade happened in the last few minutes/hours. We ought to base ourselves on the dataset whose values reach closest to end and forward-fill the other column. As a result, the following if statement is needed:
if optnMrktPrice.index[-1] >= _underlyingMrktPrice.index[-1]:
df = optnMrktPrice.copy()
df['underlying ' + underlying + ' TRDPRC_1'] = _underlyingMrktPrice
else:
df = _underlyingMrktPrice.copy()
df.rename(columns={"TRDPRC_1": 'underlying ' + underlying + ' TRDPRC_1'}, inplace=True)
df['TRDPRC_1'] = optnMrktPrice
df.columns.name = optnMrktPrice.columns.name
df.fillna(method='ffill', inplace=True) # Forward Fill to populate NaN values
df = df.dropna()
df
| STXE42750D3.EX | underlying .STOXX50E TRDPRC_1 | TRDPRC_1 |
|---|---|---|
| Timestamp | ||
| 2023-03-02 08:20:00 | 4184.42 | 63.5 |
| 2023-03-02 08:30:00 | 4179.43 | 63.5 |
| 2023-03-02 08:40:00 | 4194.54 | 63.5 |
| 2023-03-02 08:50:00 | 4203.87 | 63.5 |
| 2023-03-02 09:00:00 | 4204.93 | 63.5 |
| ... | ... | ... |
| 2023-03-08 10:10:00 | 4274.79 | 95.9 |
| 2023-03-08 10:20:00 | 4274.32 | 95.9 |
| 2023-03-08 10:30:00 | 4277.82 | 95.9 |
| 2023-03-08 10:40:00 | 4280.98 | 95.9 |
| 2023-03-08 10:50:00 | 4282.80 | 95.9 |
228 rows × 2 columns
strikePrice = searchDf2.iloc[
(searchDf2['StrikePrice']-currentUnderlyingPrc).abs().argsort()[:1]].StrikePrice.values[0]
strikePrice
4275
_EurRfRate = rd.get_history(
universe=['EURIBOR3MD='], # USD3MFSR=, USDSOFR=
fields=['TR.FIXINGVALUE'],
# Since we will use `dropna()` as a way to select the rows we are after later on in the code, we need to ask for more risk-free data than needed, just in case we don't have enough:
start=(datetime.strptime(start, '%Y-%m-%dT%H:%M:%S.%f') - timedelta(days=1)).strftime('%Y-%m-%d'),
end=(datetime.strptime(end, '%Y-%m-%dT%H:%M:%S.%f') + timedelta(days=1)).strftime('%Y-%m-%d'))
_EurRfRate
| EURIBOR3MD= | Fixing Value |
|---|---|
| Date | |
| 2023-02-28 | 2.744 |
| 2023-03-01 | 2.783 |
| 2023-03-02 | 2.801 |
| 2023-03-03 | 2.849 |
| 2023-03-06 | 2.875 |
| 2023-03-07 | 2.920 |
| 2023-03-08 | 2.944 |
Euribor values are released daily at 11am CET and published as such on Refinitiv:
EurRfRate = _EurRfRate.resample('10Min').mean().fillna(method='ffill')
df['EurRfRate'] = EurRfRate
You might be running your code after the latest Risk-Free Rate was published, in which case the most accurate value would be the latest one; thus the use of ffill:
df = df.fillna(method='ffill')
df
| STXE42750D3.EX | underlying .STOXX50E TRDPRC_1 | TRDPRC_1 | EurRfRate |
|---|---|---|---|
| Timestamp | |||
| 2023-03-02 08:20:00 | 4184.42 | 63.5 | 2.801 |
| 2023-03-02 08:30:00 | 4179.43 | 63.5 | 2.801 |
| 2023-03-02 08:40:00 | 4194.54 | 63.5 | 2.801 |
| 2023-03-02 08:50:00 | 4203.87 | 63.5 | 2.801 |
| 2023-03-02 09:00:00 | 4204.93 | 63.5 | 2.801 |
| ... | ... | ... | ... |
| 2023-03-08 10:10:00 | 4274.79 | 95.9 | 2.920 |
| 2023-03-08 10:20:00 | 4274.32 | 95.9 | 2.920 |
| 2023-03-08 10:30:00 | 4277.82 | 95.9 | 2.920 |
| 2023-03-08 10:40:00 | 4280.98 | 95.9 | 2.920 |
| 2023-03-08 10:50:00 | 4282.80 | 95.9 | 2.920 |
228 rows × 3 columns
Now for the At Trade dataframe:
pd.options.mode.chained_assignment = None # default='warn'
ATunderlyingMrktPrice['EurRfRate'] = [pd.NA for i in ATunderlyingMrktPrice.index]
for i in _EurRfRate.index:
_i = str(i)[:10]
for n, j in enumerate(ATunderlyingMrktPrice.index):
if _i in str(j):
if len(_EurRfRate.loc[i].values) == 2:
ATunderlyingMrktPrice['EurRfRate'].iloc[n] = _EurRfRate.loc[i].values[0][0]
elif len(_EurRfRate.loc[i].values) == 1:
ATunderlyingMrktPrice['EurRfRate'].iloc[n] = _EurRfRate.loc[i].values[0]
ATdf = ATunderlyingMrktPrice.copy()
Again, you might be running your code after the latest Risk-Free Rate was published, in which case the most accurate value would be the latest one; thus the use of ffill:
ATdf = ATdf.fillna(method='ffill')
ATdf
| | TRDPRC_1_OptPr | TRDPRC_1_UnderlyingPr | EurRfRate |
|---|---|---|---|
| Timestamp | | | |
| 2023-03-02 08:20:00 | 63.5 | 4184.42 | 2.801 |
| 2023-03-02 13:00:00 | 75.6 | 4219.54 | 2.801 |
| 2023-03-02 13:10:00 | 75.0 | 4217.75 | 2.801 |
| 2023-03-02 13:50:00 | 72.5 | 4211.76 | 2.801 |
| 2023-03-02 14:40:00 | 79.2 | 4226.34 | 2.801 |
| 2023-03-02 14:50:00 | 81.4 | 4230.12 | 2.801 |
| 2023-03-02 15:00:00 | 82.7 | 4236.22 | 2.801 |
| 2023-03-02 15:20:00 | 82.7 | 4235.87 | 2.801 |
| 2023-03-02 15:50:00 | 83.2 | 4237.32 | 2.801 |
| 2023-03-03 08:10:00 | 94.6 | 4265.75 | 2.849 |
| 2023-03-03 10:00:00 | 99.5 | 4275.45 | 2.849 |
| 2023-03-03 11:30:00 | 103.1 | 4279.28 | 2.849 |
| 2023-03-03 11:50:00 | 102.7 | 4279.87 | 2.849 |
| 2023-03-03 12:00:00 | 104.4 | 4282.69 | 2.849 |
| 2023-03-03 12:30:00 | 104.4 | 4279.62 | 2.849 |
| 2023-03-03 14:10:00 | 105.0 | 4284.80 | 2.849 |
| 2023-03-03 15:50:00 | 106.8 | 4290.44 | 2.849 |
| 2023-03-03 16:00:00 | 107.3 | 4291.99 | 2.849 |
| 2023-03-03 16:10:00 | 110.7 | 4297.49 | 2.849 |
| 2023-03-06 08:40:00 | 122.2 | 4319.68 | 2.875 |
| 2023-03-06 09:30:00 | 118.0 | 4311.41 | 2.875 |
| 2023-03-06 13:50:00 | 118.0 | 4313.63 | 2.875 |
| 2023-03-06 16:20:00 | 116.4 | 4311.68 | 2.875 |
| 2023-03-07 16:00:00 | 95.9 | 4280.87 | 2.920 |
We are going to assume no dividends.
On the Developer Portal, one can see documentation about the Instrument Pricing Analytics service, which gives access to calculation functions (that used to be called 'AdFin'). This service is accessible via several RESTful endpoints (in a family of endpoints called 'Quantitative Analytics') which can be used via RD. While we are going to build towards a Class that puts all our concepts together, I first want to showcase the several ways in which we can collect the data we're after, for (i) all trades & (ii) at option trades only (i.e.: not every trade of the underlying), and (a) using the RD delivery layer & (b) the RD content layer:
Data returned thus far was timestamped in the GMT time zone; we need to convert it to the timezone of our machine:
dfGMT = df.copy()
dfLocalTimeZone = df.copy()
dfLocalTimeZone.index = [
df.index[i].replace(
tzinfo=pytz.timezone(
'GMT')).astimezone(
tz=datetime.now().astimezone().tzinfo)
for i in range(len(df))]
dfGMT
| STXE42750D3.EX | underlying .STOXX50E TRDPRC_1 | TRDPRC_1 | EurRfRate |
|---|---|---|---|
| Timestamp | |||
| 2023-03-02 08:20:00 | 4184.42 | 63.5 | 2.801 |
| 2023-03-02 08:30:00 | 4179.43 | 63.5 | 2.801 |
| 2023-03-02 08:40:00 | 4194.54 | 63.5 | 2.801 |
| 2023-03-02 08:50:00 | 4203.87 | 63.5 | 2.801 |
| 2023-03-02 09:00:00 | 4204.93 | 63.5 | 2.801 |
| ... | ... | ... | ... |
| 2023-03-08 10:10:00 | 4274.79 | 95.9 | 2.920 |
| 2023-03-08 10:20:00 | 4274.32 | 95.9 | 2.920 |
| 2023-03-08 10:30:00 | 4277.82 | 95.9 | 2.920 |
| 2023-03-08 10:40:00 | 4280.98 | 95.9 | 2.920 |
| 2023-03-08 10:50:00 | 4282.80 | 95.9 | 2.920 |
228 rows × 3 columns
dfLocalTimeZone
| STXE42750D3.EX | underlying .STOXX50E TRDPRC_1 | TRDPRC_1 | EurRfRate |
|---|---|---|---|
| 2023-03-02 09:20:00+01:00 | 4184.42 | 63.5 | 2.801 |
| 2023-03-02 09:30:00+01:00 | 4179.43 | 63.5 | 2.801 |
| 2023-03-02 09:40:00+01:00 | 4194.54 | 63.5 | 2.801 |
| 2023-03-02 09:50:00+01:00 | 4203.87 | 63.5 | 2.801 |
| 2023-03-02 10:00:00+01:00 | 4204.93 | 63.5 | 2.801 |
| ... | ... | ... | ... |
| 2023-03-08 11:10:00+01:00 | 4274.79 | 95.9 | 2.920 |
| 2023-03-08 11:20:00+01:00 | 4274.32 | 95.9 | 2.920 |
| 2023-03-08 11:30:00+01:00 | 4277.82 | 95.9 | 2.920 |
| 2023-03-08 11:40:00+01:00 | 4280.98 | 95.9 | 2.920 |
| 2023-03-08 11:50:00+01:00 | 4282.80 | 95.9 | 2.920 |
228 rows × 3 columns
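The index rebuild above can also be done with pandas' built-in timezone handling, which is terser than replacing each timestamp's `tzinfo` one by one. A minimal sketch on a toy DataFrame (the `Europe/Paris` zone is an assumption standing in for your machine's local zone):

```python
import pandas as pd

# Toy DataFrame with a naive (timezone-unaware) index, standing in for `df` above.
df = pd.DataFrame(
    {"TRDPRC_1": [63.5, 75.6]},
    index=pd.to_datetime(["2023-03-02 08:20:00", "2023-03-02 13:00:00"]))

# Declare the naive timestamps as GMT, then convert the whole index in one call.
dfLocal = df.copy()
dfLocal.index = df.index.tz_localize("GMT").tz_convert("Europe/Paris")
print(dfLocal.index[0])  # 2023-03-02 09:20:00+01:00
```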
requestFields = [
"MarketValueInDealCcy", "RiskFreeRatePercent",
"UnderlyingPrice", "PricingModelType",
"DividendType", "VolatilityType",
"UnderlyingTimeStamp", "ReportCcy",
"VolatilityType", "Volatility",
"DeltaPercent", "GammaPercent",
"RhoPercent", "ThetaPercent",
"VegaPercent"]
Now for the At Trade dataframe:
universeL = [
{
"instrumentType": "Option",
"instrumentDefinition": {
"buySell": "Buy",
"underlyingType": "Eti",
"instrumentCode": instrument,
"strike": str(strikePrice),
},
"pricingParameters": {
"marketValueInDealCcy": str(dfLocalTimeZone['TRDPRC_1'][i]),
"riskFreeRatePercent": str(dfLocalTimeZone['EurRfRate'][i]),
"underlyingPrice": str(dfLocalTimeZone['underlying ' + underlying + ' TRDPRC_1'][i]),
"pricingModelType": "BlackScholes",
"dividendType": "ImpliedYield",
"volatilityType": "Implied",
"underlyingTimeStamp": "Default",
"reportCcy": "EUR"
}
}
for i in range(len(dfLocalTimeZone.index))]
def Chunks(lst, n):
"""Yield successive n-sized chunks from lst."""
for i in range(0, len(lst), n):
yield lst[i:i + n]
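`Chunks` is a plain generator that splits a list into successive slices of at most `n` elements; the last slice simply holds whatever remains. A quick self-contained check with a 228-element list (the same length as our universe) and a batch size of 100:

```python
def Chunks(lst, n):
    """Yield successive n-sized chunks from lst."""
    for i in range(0, len(lst), n):
        yield lst[i:i + n]

# 228 items in batches of 100 gives two full batches and one remainder batch.
batches = list(Chunks(list(range(228)), 100))
print([len(b) for b in batches])  # [100, 100, 28]
```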
The cell coming up next has a rather high chance of failing, because it contains no error handling of any kind in case there are issues on the servers from which we are retrieving data. The Content Layer functions do include such error-handling steps and are therefore considerably less likely to fail or run into errors.
batchOf = 100
for i, j in enumerate(Chunks(universeL, batchOf)):
print(f"Batch of {batchOf} requests no. {str(i+1)}/{str(len([i for i in Chunks(universeL, batchOf)]))} started")
    # Request IPA's financial-contracts endpoint directly via the delivery layer
request_definition = rd.delivery.endpoint_request.Definition(
method=rd.delivery.endpoint_request.RequestMethod.POST,
url='https://api.refinitiv.com/data/quantitative-analytics/v1/financial-contracts',
body_parameters={"fields": requestFields,
"outputs": ["Data", "Headers"],
"universe": j})
response3 = request_definition.get_data()
headers_name = [h['name'] for h in response3.data.raw['headers']]
if i == 0:
response3df = pd.DataFrame(
data=response3.data.raw['data'], columns=headers_name)
else:
_response3df = pd.DataFrame(
data=response3.data.raw['data'], columns=headers_name)
        response3df = pd.concat([response3df, _response3df], ignore_index=True)  # `DataFrame.append` was removed in pandas 2.0
print(f"Batch of {batchOf} requests no. {str(i+1)}/{str(len([i for i in Chunks(universeL, batchOf)]))} ended")
Batch of 100 requests no. 1/3 started
Batch of 100 requests no. 1/3 ended
Batch of 100 requests no. 2/3 started
Batch of 100 requests no. 2/3 ended
Batch of 100 requests no. 3/3 started
Batch of 100 requests no. 3/3 ended
response3df
| MarketValueInDealCcy | RiskFreeRatePercent | UnderlyingPrice | PricingModelType | DividendType | VolatilityType | UnderlyingTimeStamp | ReportCcy | VolatilityType | Volatility | DeltaPercent | GammaPercent | RhoPercent | ThetaPercent | VegaPercent | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 63.5 | 2.801 | 4184.42 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.418367 | 0.374475 | 0.001495 | 1.812390 | -1.097506 | 5.495620 |
| 1 | 63.5 | 2.801 | 4179.43 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.757155 | 0.369892 | 0.001462 | 1.787049 | -1.112810 | 5.467574 |
| 2 | 63.5 | 2.801 | 4194.54 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.723446 | 0.384334 | 0.001565 | 1.866813 | -1.065352 | 5.552583 |
| 3 | 63.5 | 2.801 | 4203.87 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.072575 | 0.394176 | 0.001637 | 1.921008 | -1.034258 | 5.604909 |
| 4 | 63.5 | 2.801 | 4204.93 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 15.997958 | 0.395344 | 0.001645 | 1.927434 | -1.030630 | 5.610824 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 223 | 95.9 | 2.920 | 4274.79 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.069137 | 0.513188 | 0.001667 | 2.528938 | -1.097071 | 5.899853 |
| 224 | 95.9 | 2.920 | 4274.32 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.109989 | 0.512425 | 0.001663 | 2.524720 | -1.099722 | 5.899610 |
| 225 | 95.9 | 2.920 | 4277.82 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 15.804338 | 0.518196 | 0.001693 | 2.556640 | -1.079779 | 5.900826 |
| 226 | 95.9 | 2.920 | 4280.98 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 15.525421 | 0.523601 | 0.001720 | 2.586507 | -1.061356 | 5.900667 |
| 227 | 95.9 | 2.920 | 4282.80 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 15.363433 | 0.526803 | 0.001737 | 2.604187 | -1.050558 | 5.899982 |
228 rows × 15 columns
As may (or may not) have been apparent above, the delivery layer does not offer any error-handling management. The server we are requesting data from may be busy, so we may get unsuccessful messages back. You could build error-handling logic yourself, but let's not reinvent the wheel when the RD Python Library exists!
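If you do stick with the delivery layer, a simple retry wrapper goes a long way against transient server errors. Below is a hedged sketch: `fetch` stands in for a zero-argument callable such as `request_definition.get_data`, and the attempt count and back-off values are arbitrary illustrative choices, not library defaults:

```python
import time

def with_retries(fetch, attempts=3, backoff_s=2.0):
    """Call `fetch` and retry on failure.

    Any exception is treated as a transient server issue until
    `attempts` is exhausted, at which point the real error is raised."""
    for attempt in range(1, attempts + 1):
        try:
            return fetch()
        except Exception:
            if attempt == attempts:
                raise  # out of retries: surface the underlying error
            time.sleep(backoff_s * attempt)  # linear back-off between tries

# Illustrative usage: response3 = with_retries(request_definition.get_data)
```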
dfLocalTimeZone
| STXE42750D3.EX | underlying .STOXX50E TRDPRC_1 | TRDPRC_1 | EurRfRate |
|---|---|---|---|
| 2023-03-02 09:20:00+01:00 | 4184.42 | 63.5 | 2.801 |
| 2023-03-02 09:30:00+01:00 | 4179.43 | 63.5 | 2.801 |
| 2023-03-02 09:40:00+01:00 | 4194.54 | 63.5 | 2.801 |
| 2023-03-02 09:50:00+01:00 | 4203.87 | 63.5 | 2.801 |
| 2023-03-02 10:00:00+01:00 | 4204.93 | 63.5 | 2.801 |
| ... | ... | ... | ... |
| 2023-03-08 11:10:00+01:00 | 4274.79 | 95.9 | 2.920 |
| 2023-03-08 11:20:00+01:00 | 4274.32 | 95.9 | 2.920 |
| 2023-03-08 11:30:00+01:00 | 4277.82 | 95.9 | 2.920 |
| 2023-03-08 11:40:00+01:00 | 4280.98 | 95.9 | 2.920 |
| 2023-03-08 11:50:00+01:00 | 4282.80 | 95.9 | 2.920 |
228 rows × 3 columns
CuniverseL = [ # C here is for the fact that we're using the content layer
option.Definition(
underlying_type=option.UnderlyingType.ETI,
buy_sell='Buy',
instrument_code=instrument,
strike=float(strikePrice),
pricing_parameters=option.PricingParameters(
market_value_in_deal_ccy=float(dfLocalTimeZone['TRDPRC_1'][i]),
risk_free_rate_percent=float(dfLocalTimeZone['EurRfRate'][i]),
underlying_price=float(dfLocalTimeZone[
'underlying ' + underlying + ' TRDPRC_1'][i]),
pricing_model_type='BlackScholes',
volatility_type='Implied',
underlying_time_stamp='Default',
report_ccy='EUR'))
for i in range(len(dfLocalTimeZone.index))]
batchOf = 100
for i, j in enumerate(Chunks(CuniverseL, batchOf)):
print(f"Batch of {len(j)} requests no. {i+1}/{len([i for i in Chunks(CuniverseL, 100)])} started")
    # Same request, this time via the content layer's `Definitions` object
response4 = rdf.Definitions(
universe=j,
fields=requestFields)
response4 = response4.get_data()
if i == 0:
response4df = response4.data.df
else:
        response4df = pd.concat([response4df, response4.data.df], ignore_index=True)  # `DataFrame.append` was removed in pandas 2.0
print(f"Batch of {len(j)} requests no. {i+1}/{len([i for i in Chunks(CuniverseL, 100)])} ended")
Batch of 100 requests no. 1/3 started
Batch of 100 requests no. 1/3 ended
Batch of 100 requests no. 2/3 started
Batch of 100 requests no. 2/3 ended
Batch of 28 requests no. 3/3 started
Batch of 28 requests no. 3/3 ended
IPADf = response4df.copy() # 'IPA' here stands for the service we used to get all the calculated values: Instrument Pricing Analytics.
IPADf.index = dfLocalTimeZone.index
IPADf.columns.name = dfLocalTimeZone.columns.name
IPADf.rename(columns={"Volatility": 'ImpliedVolatility'}, inplace=True)
IPADf
| STXE42750D3.EX | MarketValueInDealCcy | RiskFreeRatePercent | UnderlyingPrice | PricingModelType | DividendType | VolatilityType | UnderlyingTimeStamp | ReportCcy | VolatilityType | ImpliedVolatility | DeltaPercent | GammaPercent | RhoPercent | ThetaPercent | VegaPercent |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2023-03-02 09:20:00+01:00 | 63.5 | 2.801 | 4184.42 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.418367 | 0.374475 | 0.001495 | 1.81239 | -1.097506 | 5.49562 |
| 2023-03-02 09:30:00+01:00 | 63.5 | 2.801 | 4179.43 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.757155 | 0.369892 | 0.001462 | 1.787049 | -1.11281 | 5.467574 |
| 2023-03-02 09:40:00+01:00 | 63.5 | 2.801 | 4194.54 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.723446 | 0.384334 | 0.001565 | 1.866813 | -1.065352 | 5.552583 |
| 2023-03-02 09:50:00+01:00 | 63.5 | 2.801 | 4203.87 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.072575 | 0.394176 | 0.001637 | 1.921008 | -1.034258 | 5.604909 |
| 2023-03-02 10:00:00+01:00 | 63.5 | 2.801 | 4204.93 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 15.997958 | 0.395344 | 0.001645 | 1.927434 | -1.03063 | 5.610824 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 2023-03-08 11:10:00+01:00 | 95.9 | 2.92 | 4274.79 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.069137 | 0.513188 | 0.001667 | 2.528938 | -1.097071 | 5.899853 |
| 2023-03-08 11:20:00+01:00 | 95.9 | 2.92 | 4274.32 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.109989 | 0.512425 | 0.001663 | 2.52472 | -1.099722 | 5.89961 |
| 2023-03-08 11:30:00+01:00 | 95.9 | 2.92 | 4277.82 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 15.804338 | 0.518196 | 0.001693 | 2.55664 | -1.079779 | 5.900826 |
| 2023-03-08 11:40:00+01:00 | 95.9 | 2.92 | 4280.98 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 15.525421 | 0.523601 | 0.00172 | 2.586507 | -1.061356 | 5.900667 |
| 2023-03-08 11:50:00+01:00 | 95.9 | 2.92 | 4282.8 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 15.363433 | 0.526803 | 0.001737 | 2.604187 | -1.050558 | 5.899982 |
228 rows × 15 columns
ATdfGMT = ATdf.copy()
ATdfLocalTimeZone = ATdf.copy()
ATdfLocalTimeZone.index = [
ATdf.index[i].replace(
tzinfo=pytz.timezone(
'GMT')).astimezone(
tz=datetime.now().astimezone().tzinfo)
for i in range(len(ATdf))]
ATdfGMT
| TRDPRC_1_OptPr | TRDPRC_1_UnderlyingPr | EurRfRate | |
|---|---|---|---|
| Timestamp | |||
| 2023-03-02 08:20:00 | 63.5 | 4184.42 | 2.801 |
| 2023-03-02 13:00:00 | 75.6 | 4219.54 | 2.801 |
| 2023-03-02 13:10:00 | 75.0 | 4217.75 | 2.801 |
| 2023-03-02 13:50:00 | 72.5 | 4211.76 | 2.801 |
| 2023-03-02 14:40:00 | 79.2 | 4226.34 | 2.801 |
| 2023-03-02 14:50:00 | 81.4 | 4230.12 | 2.801 |
| 2023-03-02 15:00:00 | 82.7 | 4236.22 | 2.801 |
| 2023-03-02 15:20:00 | 82.7 | 4235.87 | 2.801 |
| 2023-03-02 15:50:00 | 83.2 | 4237.32 | 2.801 |
| 2023-03-03 08:10:00 | 94.6 | 4265.75 | 2.849 |
| 2023-03-03 10:00:00 | 99.5 | 4275.45 | 2.849 |
| 2023-03-03 11:30:00 | 103.1 | 4279.28 | 2.849 |
| 2023-03-03 11:50:00 | 102.7 | 4279.87 | 2.849 |
| 2023-03-03 12:00:00 | 104.4 | 4282.69 | 2.849 |
| 2023-03-03 12:30:00 | 104.4 | 4279.62 | 2.849 |
| 2023-03-03 14:10:00 | 105.0 | 4284.80 | 2.849 |
| 2023-03-03 15:50:00 | 106.8 | 4290.44 | 2.849 |
| 2023-03-03 16:00:00 | 107.3 | 4291.99 | 2.849 |
| 2023-03-03 16:10:00 | 110.7 | 4297.49 | 2.849 |
| 2023-03-06 08:40:00 | 122.2 | 4319.68 | 2.875 |
| 2023-03-06 09:30:00 | 118.0 | 4311.41 | 2.875 |
| 2023-03-06 13:50:00 | 118.0 | 4313.63 | 2.875 |
| 2023-03-06 16:20:00 | 116.4 | 4311.68 | 2.875 |
| 2023-03-07 16:00:00 | 95.9 | 4280.87 | 2.920 |
ATdfLocalTimeZone
| TRDPRC_1_OptPr | TRDPRC_1_UnderlyingPr | EurRfRate | |
|---|---|---|---|
| 2023-03-02 09:20:00+01:00 | 63.5 | 4184.42 | 2.801 |
| 2023-03-02 14:00:00+01:00 | 75.6 | 4219.54 | 2.801 |
| 2023-03-02 14:10:00+01:00 | 75.0 | 4217.75 | 2.801 |
| 2023-03-02 14:50:00+01:00 | 72.5 | 4211.76 | 2.801 |
| 2023-03-02 15:40:00+01:00 | 79.2 | 4226.34 | 2.801 |
| 2023-03-02 15:50:00+01:00 | 81.4 | 4230.12 | 2.801 |
| 2023-03-02 16:00:00+01:00 | 82.7 | 4236.22 | 2.801 |
| 2023-03-02 16:20:00+01:00 | 82.7 | 4235.87 | 2.801 |
| 2023-03-02 16:50:00+01:00 | 83.2 | 4237.32 | 2.801 |
| 2023-03-03 09:10:00+01:00 | 94.6 | 4265.75 | 2.849 |
| 2023-03-03 11:00:00+01:00 | 99.5 | 4275.45 | 2.849 |
| 2023-03-03 12:30:00+01:00 | 103.1 | 4279.28 | 2.849 |
| 2023-03-03 12:50:00+01:00 | 102.7 | 4279.87 | 2.849 |
| 2023-03-03 13:00:00+01:00 | 104.4 | 4282.69 | 2.849 |
| 2023-03-03 13:30:00+01:00 | 104.4 | 4279.62 | 2.849 |
| 2023-03-03 15:10:00+01:00 | 105.0 | 4284.80 | 2.849 |
| 2023-03-03 16:50:00+01:00 | 106.8 | 4290.44 | 2.849 |
| 2023-03-03 17:00:00+01:00 | 107.3 | 4291.99 | 2.849 |
| 2023-03-03 17:10:00+01:00 | 110.7 | 4297.49 | 2.849 |
| 2023-03-06 09:40:00+01:00 | 122.2 | 4319.68 | 2.875 |
| 2023-03-06 10:30:00+01:00 | 118.0 | 4311.41 | 2.875 |
| 2023-03-06 14:50:00+01:00 | 118.0 | 4313.63 | 2.875 |
| 2023-03-06 17:20:00+01:00 | 116.4 | 4311.68 | 2.875 |
| 2023-03-07 17:00:00+01:00 | 95.9 | 4280.87 | 2.920 |
ATuniverseL = [
{
"instrumentType": "Option",
"instrumentDefinition": {
"buySell": "Buy",
"underlyingType": "Eti",
"instrumentCode": instrument,
"strike": str(strikePrice),
},
"pricingParameters": {
"marketValueInDealCcy": str(ATdfLocalTimeZone['TRDPRC_1_OptPr'][i]),
"riskFreeRatePercent": str(ATdfLocalTimeZone['EurRfRate'][i]),
"underlyingPrice": str(ATdfLocalTimeZone['TRDPRC_1_UnderlyingPr'][i]),
"pricingModelType": "BlackScholes",
"dividendType": "ImpliedYield",
"volatilityType": "Implied",
"underlyingTimeStamp": "Default",
"reportCcy": "EUR"
}
}
for i in range(len(ATdfLocalTimeZone.index))]
ATCUniverseL = [ # C here is for the fact that we're using the content layer
option.Definition(
underlying_type=option.UnderlyingType.ETI,
buy_sell='Buy',
instrument_code=instrument,
strike=float(strikePrice),
pricing_parameters=option.PricingParameters(
market_value_in_deal_ccy=float(ATdfLocalTimeZone['TRDPRC_1_OptPr'][i]),
risk_free_rate_percent=float(ATdfLocalTimeZone['EurRfRate'][i]),
underlying_price=float(ATdfLocalTimeZone['TRDPRC_1_UnderlyingPr'][i]),
pricing_model_type='BlackScholes',
volatility_type='Implied',
underlying_time_stamp='Default',
report_ccy='EUR'))
for i in range(len(ATdfLocalTimeZone.index))]
batchOf = 100
for i, j in enumerate(Chunks(ATCUniverseL, batchOf)):
print(f"Batch of {len(j)} requests no. {i+1}/{len([i for i in Chunks(ATCUniverseL, batchOf)])} started")
    # Same content-layer request, this time for the At Trade universe
response5 = rdf.Definitions(
universe=j,
fields=requestFields)
response5 = response5.get_data()
if i == 0:
response5df = response5.data.df
else:
        response5df = pd.concat([response5df, response5.data.df], ignore_index=True)  # `DataFrame.append` was removed in pandas 2.0
print(f"Batch of {len(j)} requests no. {i+1}/{len([i for i in Chunks(ATCUniverseL, batchOf)])} ended")
Batch of 24 requests no. 1/1 started
Batch of 24 requests no. 1/1 ended
response5df
| MarketValueInDealCcy | RiskFreeRatePercent | UnderlyingPrice | PricingModelType | DividendType | VolatilityType | UnderlyingTimeStamp | ReportCcy | VolatilityType | Volatility | DeltaPercent | GammaPercent | RhoPercent | ThetaPercent | VegaPercent | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 63.5 | 2.801 | 4184.42 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.418367 | 0.374475 | 0.001495 | 1.81239 | -1.097506 | 5.49562 |
| 1 | 75.6 | 2.801 | 4219.54 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.072497 | 0.425851 | 0.001564 | 2.074984 | -1.122589 | 5.729974 |
| 2 | 75.0 | 2.801 | 4217.75 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.100426 | 0.423212 | 0.00156 | 2.061375 | -1.122466 | 5.720313 |
| 3 | 72.5 | 2.801 | 4211.76 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.101667 | 0.413892 | 0.001554 | 2.014011 | -1.115406 | 5.684593 |
| 4 | 79.2 | 2.801 | 4226.34 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.188349 | 0.437073 | 0.001558 | 2.131311 | -1.137581 | 5.767112 |
| 5 | 81.4 | 2.801 | 4230.12 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.281122 | 0.44339 | 0.001552 | 2.162863 | -1.147451 | 5.78592 |
| 6 | 82.7 | 2.801 | 4236.22 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.034427 | 0.451867 | 0.001577 | 2.207845 | -1.136203 | 5.810265 |
| 7 | 82.7 | 2.801 | 4235.87 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.061637 | 0.451427 | 0.001574 | 2.20541 | -1.13774 | 5.809021 |
| 8 | 83.2 | 2.801 | 4237.32 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.03477 | 0.453603 | 0.001577 | 2.216709 | -1.137124 | 5.814722 |
| 9 | 94.6 | 2.849 | 4265.75 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.653083 | 0.498333 | 0.001613 | 2.448527 | -1.130032 | 5.89137 |
| 10 | 99.5 | 2.849 | 4275.45 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.651426 | 0.513955 | 0.001608 | 2.52896 | -1.131996 | 5.900333 |
| 11 | 103.1 | 2.849 | 4279.28 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.925914 | 0.520135 | 0.001579 | 2.558874 | -1.150618 | 5.901346 |
| 12 | 102.7 | 2.849 | 4279.87 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.80609 | 0.521061 | 0.00159 | 2.564505 | -1.142668 | 5.901396 |
| 13 | 104.4 | 2.849 | 4282.69 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.844116 | 0.525534 | 0.001585 | 2.58732 | -1.145256 | 5.901142 |
| 14 | 104.4 | 2.849 | 4279.62 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.11622 | 0.520687 | 0.001562 | 2.560368 | -1.16331 | 5.901364 |
| 15 | 105.0 | 2.849 | 4284.8 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.757288 | 0.528913 | 0.001591 | 2.605384 | -1.139414 | 5.900422 |
| 16 | 106.8 | 2.849 | 4290.44 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.552382 | 0.538073 | 0.001605 | 2.654189 | -1.125186 | 5.896183 |
| 17 | 107.3 | 2.849 | 4291.99 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.495395 | 0.540626 | 0.001609 | 2.667801 | -1.12112 | 5.894403 |
| 18 | 110.7 | 2.849 | 4297.49 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.563742 | 0.549344 | 0.001596 | 2.712451 | -1.124419 | 5.886734 |
| 19 | 122.2 | 2.875 | 4319.68 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.365666 | 0.58539 | 0.001582 | 2.900982 | -1.101885 | 5.823085 |
| 20 | 118.0 | 2.875 | 4311.41 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.465957 | 0.571941 | 0.001586 | 2.830313 | -1.113695 | 5.852768 |
| 21 | 118.0 | 2.875 | 4313.63 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.248047 | 0.57618 | 0.001603 | 2.853885 | -1.09774 | 5.843655 |
| 22 | 116.4 | 2.875 | 4311.68 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.166077 | 0.573312 | 0.001615 | 2.839553 | -1.093304 | 5.84929 |
| 23 | 95.9 | 2.92 | 4280.87 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 15.535179 | 0.52341 | 0.001719 | 2.58545 | -1.062005 | 5.900694 |
ATIPADf = response5df.copy() # 'IPA' here stands for the service we used to get all the calculated values: Instrument Pricing Analytics.
ATIPADf.index = ATdfLocalTimeZone.index
ATIPADf.columns.name = ATdfLocalTimeZone.columns.name
ATIPADf.rename(columns={"Volatility": 'ImpliedVolatility'}, inplace=True)
ATIPADf
| MarketValueInDealCcy | RiskFreeRatePercent | UnderlyingPrice | PricingModelType | DividendType | VolatilityType | UnderlyingTimeStamp | ReportCcy | VolatilityType | ImpliedVolatility | DeltaPercent | GammaPercent | RhoPercent | ThetaPercent | VegaPercent | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2023-03-02 09:20:00+01:00 | 63.5 | 2.801 | 4184.42 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.418367 | 0.374475 | 0.001495 | 1.81239 | -1.097506 | 5.49562 |
| 2023-03-02 14:00:00+01:00 | 75.6 | 2.801 | 4219.54 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.072497 | 0.425851 | 0.001564 | 2.074984 | -1.122589 | 5.729974 |
| 2023-03-02 14:10:00+01:00 | 75.0 | 2.801 | 4217.75 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.100426 | 0.423212 | 0.00156 | 2.061375 | -1.122466 | 5.720313 |
| 2023-03-02 14:50:00+01:00 | 72.5 | 2.801 | 4211.76 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.101667 | 0.413892 | 0.001554 | 2.014011 | -1.115406 | 5.684593 |
| 2023-03-02 15:40:00+01:00 | 79.2 | 2.801 | 4226.34 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.188349 | 0.437073 | 0.001558 | 2.131311 | -1.137581 | 5.767112 |
| 2023-03-02 15:50:00+01:00 | 81.4 | 2.801 | 4230.12 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.281122 | 0.44339 | 0.001552 | 2.162863 | -1.147451 | 5.78592 |
| 2023-03-02 16:00:00+01:00 | 82.7 | 2.801 | 4236.22 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.034427 | 0.451867 | 0.001577 | 2.207845 | -1.136203 | 5.810265 |
| 2023-03-02 16:20:00+01:00 | 82.7 | 2.801 | 4235.87 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.061637 | 0.451427 | 0.001574 | 2.20541 | -1.13774 | 5.809021 |
| 2023-03-02 16:50:00+01:00 | 83.2 | 2.801 | 4237.32 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.03477 | 0.453603 | 0.001577 | 2.216709 | -1.137124 | 5.814722 |
| 2023-03-03 09:10:00+01:00 | 94.6 | 2.849 | 4265.75 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.653083 | 0.498333 | 0.001613 | 2.448527 | -1.130032 | 5.89137 |
| 2023-03-03 11:00:00+01:00 | 99.5 | 2.849 | 4275.45 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.651426 | 0.513955 | 0.001608 | 2.52896 | -1.131996 | 5.900333 |
| 2023-03-03 12:30:00+01:00 | 103.1 | 2.849 | 4279.28 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.925914 | 0.520135 | 0.001579 | 2.558874 | -1.150618 | 5.901346 |
| 2023-03-03 12:50:00+01:00 | 102.7 | 2.849 | 4279.87 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.80609 | 0.521061 | 0.00159 | 2.564505 | -1.142668 | 5.901396 |
| 2023-03-03 13:00:00+01:00 | 104.4 | 2.849 | 4282.69 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.844116 | 0.525534 | 0.001585 | 2.58732 | -1.145256 | 5.901142 |
| 2023-03-03 13:30:00+01:00 | 104.4 | 2.849 | 4279.62 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 17.11622 | 0.520687 | 0.001562 | 2.560368 | -1.16331 | 5.901364 |
| 2023-03-03 15:10:00+01:00 | 105.0 | 2.849 | 4284.8 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.757288 | 0.528913 | 0.001591 | 2.605384 | -1.139414 | 5.900422 |
| 2023-03-03 16:50:00+01:00 | 106.8 | 2.849 | 4290.44 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.552382 | 0.538073 | 0.001605 | 2.654189 | -1.125186 | 5.896183 |
| 2023-03-03 17:00:00+01:00 | 107.3 | 2.849 | 4291.99 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.495395 | 0.540626 | 0.001609 | 2.667801 | -1.12112 | 5.894403 |
| 2023-03-03 17:10:00+01:00 | 110.7 | 2.849 | 4297.49 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.563742 | 0.549344 | 0.001596 | 2.712451 | -1.124419 | 5.886734 |
| 2023-03-06 09:40:00+01:00 | 122.2 | 2.875 | 4319.68 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.365666 | 0.58539 | 0.001582 | 2.900982 | -1.101885 | 5.823085 |
| 2023-03-06 10:30:00+01:00 | 118.0 | 2.875 | 4311.41 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.465957 | 0.571941 | 0.001586 | 2.830313 | -1.113695 | 5.852768 |
| 2023-03-06 14:50:00+01:00 | 118.0 | 2.875 | 4313.63 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.248047 | 0.57618 | 0.001603 | 2.853885 | -1.09774 | 5.843655 |
| 2023-03-06 17:20:00+01:00 | 116.4 | 2.875 | 4311.68 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 16.166077 | 0.573312 | 0.001615 | 2.839553 | -1.093304 | 5.84929 |
| 2023-03-07 17:00:00+01:00 | 95.9 | 2.92 | 4280.87 | BlackScholes | ImpliedYield | Calculated | Default | EUR | Calculated | 15.535179 | 0.52341 | 0.001719 | 2.58545 | -1.062005 | 5.900694 |
From now on, we will not show the 'At Trade' dataframe equivalents, because they are... equivalent!
display(searchDf2.iloc[
(searchDf2['StrikePrice']-currentUnderlyingPrc).abs().argsort()[:1]])
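The `argsort` idiom used here picks the row whose `StrikePrice` is numerically closest to the current underlying price: sort the rows by absolute distance to the price, then keep the first one. A toy reproduction (the strikes and the 4272 underlying price below are made up for illustration):

```python
import pandas as pd

# Hypothetical slice of a search result, standing in for `searchDf2`.
searchDf2 = pd.DataFrame(
    {"RIC": ["STXE42000D3.EX", "STXE42750D3.EX", "STXE43500D3.EX"],
     "StrikePrice": [4200, 4275, 4350]})
currentUnderlyingPrc = 4272.0

# Absolute distances are [72, 3, 78]; argsort ranks the 4275 strike first.
closest = searchDf2.iloc[
    (searchDf2["StrikePrice"] - currentUnderlyingPrc).abs().argsort()[:1]]
print(closest["RIC"].values[0])  # STXE42750D3.EX
```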
IPADfGraph = IPADf[['ImpliedVolatility', 'MarketValueInDealCcy',
'RiskFreeRatePercent', 'UnderlyingPrice', 'DeltaPercent',
'GammaPercent', 'RhoPercent', 'ThetaPercent', 'VegaPercent']]
fig = px.line(IPADfGraph) # This is just to see the implied vol graph when that field is available
# fig.layout = dict(xaxis=dict(type="category"))
# Format Graph: https://plotly.com/python/tick-formatting/
fig.update_layout(
title=instrument,
template='plotly_dark')
# Make it so that only one line is shown by default: # https://stackoverflow.com/questions/73384807/plotly-express-plot-subset-of-dataframe-columns-by-default-and-the-rest-as-opt
fig.for_each_trace(
lambda t: t.update(
visible=True if t.name in IPADfGraph.columns[:1] else "legendonly"))
# fig.update_xaxes(autorange=True)
# fig.update_layout(yaxis=IPADf.index[0::10])
fig.show()
| DocumentTitle | RIC | StrikePrice | ExchangeCode | ExpiryDate | UnderlyingQuoteRIC | |
|---|---|---|---|---|---|---|
| 10 | Eurex Dow Jones EURO STOXX 50 Index Option 427... | STXE42750D3.EX | 4275 | EUX | 2023-04-21 | [.STOXX50E] |
This representation will allow us to see several graphs at different scales stacked above one another. This way, we can see if the change in Implied Volatility is caused by a movement in the underlying or the Option price itself:
fig = subplots.make_subplots(rows=3, cols=1)
fig.add_trace(go.Scatter(x=IPADf.index, y=IPADf.ImpliedVolatility, name='Op Imp Volatility'), row=1, col=1)
fig.add_trace(go.Scatter(x=IPADf.index, y=IPADf.MarketValueInDealCcy, name='Op Mk Pr'), row=2, col=1)
fig.add_trace(go.Scatter(x=IPADf.index, y=IPADf.UnderlyingPrice, name=underlying + ' Undrlyg Pr'), row=3, col=1)
fig.update(layout_xaxis_rangeslider_visible=False)
fig.update_layout(title=IPADf.columns.name)
fig.update_layout(
template='plotly_dark',
autosize=False,
width=1300,
height=500)
fig.show()
searchDf2.iloc[(searchDf2['StrikePrice']-currentUnderlyingPrc).abs().argsort()[:1]]
| DocumentTitle | RIC | StrikePrice | ExchangeCode | ExpiryDate | UnderlyingQuoteRIC | |
|---|---|---|---|---|---|---|
| 10 | Eurex Dow Jones EURO STOXX 50 Index Option 427... | STXE42750D3.EX | 4275 | EUX | 2023-04-21 | [.STOXX50E] |
Certain companies are slow to update libraries, dependencies or Python versions. You (or they) may thus not have access to plotly (the graphing library we used above). Matplotlib is rather lightweight and should work even on machines with old setups:
display(searchDf2.iloc[(searchDf2.StrikePrice-currentUnderlyingPrc).abs().argsort()[:1]])
ATIPADfSimpleGraph = pd.DataFrame(
data=ATIPADf.ImpliedVolatility.values, index=ATIPADf.ImpliedVolatility.index)
fig, ax = plt.subplots(ncols=1)
ax.plot(ATIPADfSimpleGraph, '.-')
# ax.xaxis.set_major_formatter(ticker.FuncFormatter(format_date))
ax.set_title(f"{searchDf2.iloc[(searchDf2.StrikePrice-currentUnderlyingPrc).abs().argsort()[:1]].RIC.values[0]} Implied Volatility At Trade Only")
fig.autofmt_xdate()
plt.show()
| DocumentTitle | RIC | StrikePrice | ExchangeCode | ExpiryDate | UnderlyingQuoteRIC | |
|---|---|---|---|---|---|---|
| 10 | Eurex Dow Jones EURO STOXX 50 Index Option 427... | STXE42750D3.EX | 4275 | EUX | 2023-04-21 | [.STOXX50E] |
Note here that we are looking only 'At Trade', i.e.: times when the option traded, not the underlying. There are therefore fewer datapoints.
Let's put it all together into a single function. This ImpVolatilityCalcIPA function will allow anyone to:
(I) find the option (i) with the index of your choice (.SPX or .STOXX50E) as underlying, (ii) with the strike price closest to the current underlying price (i.e.: At The Money) and (iii) with the next closest expiry date past x days after today,
(II) calculate the Implied Volatility for that option either (i) only at times when the option itself is traded or (ii) at any time the option or the underlying is being traded.
def ImpVolatilityCalcIPA(x=15,
instrument=None,
indexUnderlying=".STOXX50E",
callOrPut='Put',
dateBack=3,
expiryYearOfInterest=datetime.now().year,
riskFreeRate=None, riskFreeRateField=None,
timeZoneInGraph=datetime.now().astimezone(),
maxColwidth=200,
graphStyle='overlay', # 'overlay', '3 graphs', 'simple'
simpleGraphLineStyle='.-', # 'o-'
simpleGraphSize=(15, 5),
graphTemplate='plotly_dark',
debug=False,
returnDfGraph=False,
AtOptionTradeOnly=False):
if indexUnderlying == ".STOXX50E":
exchangeC, exchangeRIC, mcalGetCalendar = 'EUX', 'STX', 'EUREX'
elif indexUnderlying == '.SPX':
        exchangeC, exchangeRIC, mcalGetCalendar = 'OPQ', 'SPX', 'CBOE_Futures'  # should ideally be 'CBOE_Index_Options'
def Get_exp_dates(year=expiryYearOfInterest,
days=True,
mcal_get_calendar=mcalGetCalendar):
'''
Get_exp_dates Version 3.0:
This function gets expiration dates for a year for monthly index options, which are the 3rd Fridays of each month.
Changes
----------------------------------------------
Changed from Version 1.0 to 2.0: Jonathan Legrand changed Haykaz Aramyan's original code to allow
(i) for the function's holiday argument to be changed, and defaulted to 'EUREX' as opposed to 'CBOE_Index_Options' and
(ii) for the function to output full date objects as opposed to just days of the month if argument days=True.
Changed from Version 2.0 to 3.0: Jonathan Legrand changed this function to reflect the fact that it can be used for indexes other than EUREX.
Dependencies
----------------------------------------------
Python library 'pandas_market_calendars' version 3.2
Parameters
-----------------------------------------------
Input:
year(int): year for which expiration days are requested
mcal_get_calendar(str): String of the calendar for which holidays have to be taken into account. More on this calendar (link to Github checked 2022-10-11): https://github.com/rsheftel/pandas_market_calendars/blob/177e7922c7df5ad249b0d066b5c9e730a3ee8596/pandas_market_calendars/exchange_calendar_cboe.py
Default: mcal_get_calendar='EUREX'
days(bool): If True, only the days of the month are output; otherwise, full datetime objects are returned.
Default: days=True
Output:
dates(dict): dictionary of expiration days for each month of a specified year in datetime.date format.
'''
        # get the chosen exchange calendar's market holidays
Cal = mcal.get_calendar(mcal_get_calendar)
holidays = Cal.holidays().holidays
# set calendar starting from Saturday
c = calendar.Calendar(firstweekday=calendar.SATURDAY)
# get the 3rd Friday of each month
exp_dates = {}
for i in range(1, 13):
monthcal = c.monthdatescalendar(year, i)
date = monthcal[2][-1]
# check if the found date is a holiday and take the previous date if it is
if date in holidays:
date = date + timedelta(-1)
# append the date to the dictionary
if year in exp_dates:
### Changed from original code from here on by Jonathan Legrand on 2022-10-11
if days: exp_dates[year].append(date.day)
else: exp_dates[year].append(date)
else:
if days: exp_dates[year] = [date.day]
else: exp_dates[year] = [date]
return exp_dates
timeOfCalcDatetime = datetime.now() # For now, we will focus on the use-case where we are calculating values for today; later we will allow for any day going back a few business days.
timeOfCalcStr = datetime.now().strftime('%Y-%m-%d')
fullDatesAtTimeOfCalc = Get_exp_dates(timeOfCalcDatetime.year, days=False) # `timeOfCalcDatetime.year` here is 2023
fullDatesAtTimeOfCalcDatetime = [
datetime(i.year, i.month, i.day)
for i in fullDatesAtTimeOfCalc[list(fullDatesAtTimeOfCalc.keys())[0]]]
expiryDateOfInt = [i for i in fullDatesAtTimeOfCalcDatetime
if i > timeOfCalcDatetime + relativedelta(days=x)][0]
if debug: print(f"expiryDateOfInt: {expiryDateOfInt}")
response = search.Definition(
view = search.Views.SEARCH_ALL, # To see what views are available: `help(search.Views)` & `search.metadata.Definition(view = search.Views.SEARCH_ALL).get_data().data.df.to_excel("SEARCH_ALL.xlsx")`
query=indexUnderlying,
select="DocumentTitle, RIC, StrikePrice, ExchangeCode, ExpiryDate, UnderlyingAsset, " +
"UnderlyingAssetName, UnderlyingAssetRIC, ESMAUnderlyingIndexCode, RCSUnderlyingMarket, " +
"UnderlyingQuoteName, UnderlyingQuoteRIC",
filter=f"RCSAssetCategoryLeaf eq 'Option' and RIC eq '{exchangeRIC}*' and DocumentTitle ne '*Weekly*' " +
f"and CallPutOption eq '{callOrPut}' and ExchangeCode eq '{exchangeC}' and " +
f"ExpiryDate ge {(expiryDateOfInt - relativedelta(days=1)).strftime('%Y-%m-%d')} " +
f"and ExpiryDate lt {(expiryDateOfInt + relativedelta(days=1)).strftime('%Y-%m-%d')}", # ge (greater than or equal to), gt (greater than), lt (less than) and le (less than or equal to). These can only be applied to numeric and date properties.
top=10000,
).get_data()
searchDf = response.data.df
if debug: display(searchDf)
try:
underlyingPrice = rd.get_history(
universe=[indexUnderlying],
fields=["TRDPRC_1"],
interval="tick").iloc[-1][0]
except:
print("Function failed at the search stage, returning the following dataframe: ")
display(searchDf)
if debug:
print(f"Underlying {indexUnderlying}'s price recorded here was {underlyingPrice}")
display(searchDf.iloc[(searchDf['StrikePrice']-underlyingPrice).abs().argsort()[:10]])
if instrument is None:
instrument = searchDf.iloc[(searchDf['StrikePrice']-underlyingPrice).abs().argsort()[:1]].RIC.values[0]
start = (timeOfCalcDatetime - pd.tseries.offsets.BDay(dateBack)).strftime('%Y-%m-%dT%H:%M:%S.%f') # '2022-10-05T07:30:00.000'
endDateTime = datetime.now()
end = endDateTime.strftime('%Y-%m-%dT%H:%M:%S.%f') # e.g.: '2022-09-09T20:00:00.000'
_optnMrktPrice = rd.get_history(
universe=[instrument],
fields=["TRDPRC_1"],
interval="10min",
start=start, # Ought to always start at 4 am for OPRA exchanged Options, more info in the article below
end=end) # Ought to always end at 8 pm for OPRA exchanged Options, more info in the article below
if debug:
print(instrument)
display(_optnMrktPrice)
## Data on certain options are stale and do not necessarily show up on Workspace. If that happens, we will pick the next ATM Option, which will probably have the same strike; we only do so a couple of times, as any more could take us too far from the ATM strike:
if _optnMrktPrice.empty:
if debug: print(f"No data could be found for {instrument}, so the next ATM Option was chosen")
instrument = searchDf.iloc[(searchDf['StrikePrice']-underlyingPrice).abs().argsort()[1:2]].RIC.values[0]
if debug: print(f"{instrument}")
_optnMrktPrice = rd.get_history(universe=[instrument],
fields=["TRDPRC_1"], interval="10min",
start=start, end=end)
if debug: display(_optnMrktPrice)
if _optnMrktPrice.empty: # Let's try one more time, as is often necessary
if debug: print(f"No data could be found for {instrument}, so the next ATM Option was chosen")
instrument = searchDf.iloc[(searchDf['StrikePrice']-underlyingPrice).abs().argsort()[2:3]].RIC.values[0]
if debug: print(f"{instrument}")
_optnMrktPrice = rd.get_history(universe=[instrument],
fields=["TRDPRC_1"], interval="10min",
start=start, end=end)
if debug: display(_optnMrktPrice)
if _optnMrktPrice.empty:
print(f"No data could be found for {instrument}, please check it on Refinitiv Workspace")
optnMrktPrice = _optnMrktPrice.resample('10Min').mean() # get a datapoint every 10 min
optnMrktPrice = optnMrktPrice[optnMrktPrice.index.strftime('%Y-%m-%d').isin([i for i in _optnMrktPrice.index.strftime('%Y-%m-%d').unique()])] # Only keep trading days
optnMrktPrice = optnMrktPrice.loc[(optnMrktPrice.index.strftime('%H:%M:%S') >= '07:30:00') & (optnMrktPrice.index.strftime('%H:%M:%S') <= '22:00:00')] # Only keep trading hours
optnMrktPrice = optnMrktPrice.ffill() # Forward Fill to populate NaN values
# Note also that one may want to only look at 'At Option Trade' datapoints,
# i.e.: Implied Volatility when a trade is made for the Option, but not when
# none is made. For this, we will use the 'At Trade' (`AT`) dataframes:
if AtOptionTradeOnly: AToptnMrktPrice = _optnMrktPrice
underlying = searchDf.iloc[(searchDf['StrikePrice']-underlyingPrice).abs().argsort()[:1]].UnderlyingQuoteRIC.values[0][0]
_underlyingMrktPrice = rd.get_history(
universe=[underlying],
fields=["TRDPRC_1"],
interval="10min",
start=start,
end=end)
# Let's put it all in one data-frame, `df`. Some datasets will have data
# going from the time we set for `start` all the way to `end`. Some won't,
# because no trade happened in the past few minutes/hours. We ought to base
# ourselves on the dataset with values reaching closest to `end` and `ffill`
# the other column. As a result, the following `if` statement is needed:
if optnMrktPrice.index[-1] >= _underlyingMrktPrice.index[-1]:
df = optnMrktPrice.copy()
df[f"underlying {underlying} TRDPRC_1"] = _underlyingMrktPrice
else:
df = _underlyingMrktPrice.copy()
df.rename(
columns={"TRDPRC_1": f"underlying {underlying} TRDPRC_1"},
inplace=True)
df['TRDPRC_1'] = optnMrktPrice
df.columns.name = optnMrktPrice.columns.name
df = df.ffill() # Forward Fill to populate NaN values
df = df.dropna()
if AtOptionTradeOnly:
ATunderlyingMrktPrice = AToptnMrktPrice.join(
_underlyingMrktPrice, lsuffix='_OptPr', how='inner',
rsuffix=f" Underlying {underlying} TRDPRC_1")
ATdf = ATunderlyingMrktPrice
strikePrice = searchDf.iloc[(searchDf['StrikePrice']-underlyingPrice).abs().argsort()[:1]].StrikePrice.values[0]
if riskFreeRate is None and indexUnderlying == ".SPX":
_riskFreeRate = 'USDCFCFCTSA3M='
_riskFreeRateField = 'TR.FIXINGVALUE'
elif riskFreeRate is None and indexUnderlying == ".STOXX50E":
_riskFreeRate = 'EURIBOR3MD='
_riskFreeRateField = 'TR.FIXINGVALUE'
else:
_riskFreeRate, _riskFreeRateField = riskFreeRate, riskFreeRateField
_RfRate = rd.get_history(
universe=[_riskFreeRate], # USD3MFSR=, USDSOFR=
fields=[_riskFreeRateField],
# Since we will use `dropna()` as a way to select the rows we are after later on in the code, we need to ask for more risk-free data than needed, just in case we don't have enough:
start=(datetime.strptime(start, '%Y-%m-%dT%H:%M:%S.%f') - timedelta(days=1)).strftime('%Y-%m-%d'),
end=(datetime.strptime(end, '%Y-%m-%dT%H:%M:%S.%f') + timedelta(days=1)).strftime('%Y-%m-%d'))
RfRate = _RfRate.resample('10Min').mean().ffill()
if AtOptionTradeOnly:
pd.options.mode.chained_assignment = None # default='warn'
ATunderlyingMrktPrice['RfRate'] = [pd.NA for i in ATunderlyingMrktPrice.index]
for i in RfRate.index:
_i = str(i)[:10]
for n, j in enumerate(ATunderlyingMrktPrice.index):
if _i in str(j):
if len(RfRate.loc[i].values) == 2:
ATunderlyingMrktPrice['RfRate'].iloc[n] = RfRate.loc[i].values[0][0]
elif len(RfRate.loc[i].values) == 1:
ATunderlyingMrktPrice['RfRate'].iloc[n] = RfRate.loc[i].values[0]
ATdf = ATunderlyingMrktPrice.copy()
ATdf = ATdf.ffill() # This is in case there were no Risk Free datapoints released after a certain time, but trades on the option still went through.
else:
df['RfRate'] = RfRate
df = df.ffill()
if timeZoneInGraph != 'GMT' and AtOptionTradeOnly:
ATdf.index = [
ATdf.index[i].replace(
tzinfo=pytz.timezone(
'GMT')).astimezone(
tz=datetime.now().astimezone().tzinfo)
for i in range(len(ATdf))]
elif timeZoneInGraph != 'GMT':
df.index = [
df.index[i].replace(
tzinfo=pytz.timezone(
'GMT')).astimezone(
tz=timeZoneInGraph.tzinfo)
for i in range(len(df))]
if AtOptionTradeOnly:
if debug:
print("ATdf")
display(ATdf)
universeL = [
option.Definition(
underlying_type=option.UnderlyingType.ETI,
buy_sell='Buy',
instrument_code=instrument,
strike=float(strikePrice),
pricing_parameters=option.PricingParameters(
market_value_in_deal_ccy=float(ATdf.TRDPRC_1_OptPr[i]),
risk_free_rate_percent=float(ATdf.RfRate[i]),
underlying_price=float(ATdf[
f"TRDPRC_1 Underlying {underlying} TRDPRC_1"][i]),
pricing_model_type='BlackScholes',
volatility_type='Implied',
underlying_time_stamp='Default',
report_ccy='EUR'))
for i in range(len(ATdf.index))]
else:
if debug:
print("df")
display(df)
universeL = [ # The list of IPA Option Definitions we will send via the content layer
option.Definition(
underlying_type=option.UnderlyingType.ETI,
buy_sell='Buy',
instrument_code=instrument,
strike=float(strikePrice),
pricing_parameters=option.PricingParameters(
market_value_in_deal_ccy=float(df.TRDPRC_1[i]),
risk_free_rate_percent=float(df.RfRate[i]),
underlying_price=float(df[
f"underlying {underlying} TRDPRC_1"][i]),
pricing_model_type='BlackScholes',
volatility_type='Implied',
underlying_time_stamp='Default',
report_ccy='EUR'))
for i in range(len(df.index))]
def Chunks(lst, n):
"""Yield successive n-sized chunks from lst."""
for i in range(0, len(lst), n):
yield lst[i:i + n]
requestFields = [
"MarketValueInDealCcy", "RiskFreeRatePercent",
"UnderlyingPrice", "PricingModelType",
"DividendType", "VolatilityType",
"UnderlyingTimeStamp", "ReportCcy",
"VolatilityType", "Volatility",
"DeltaPercent", "GammaPercent",
"RhoPercent", "ThetaPercent", "VegaPercent"]
batchOf = 100
for i, j in enumerate(Chunks(universeL, batchOf)):
if debug: print(f"Batch of {len(j)} requests no. {i+1}/{len([i for i in Chunks(universeL, batchOf)])} started")
# Send this batch of Option Definitions to IPA in a single request
request_definition = rdf.Definitions(universe=j, fields=requestFields)
response = request_definition.get_data()
if i == 0:
IPADf = response.data.df
else:
IPADf = pd.concat([IPADf, response.data.df], ignore_index=True) # `pd.concat`, since `DataFrame.append` was removed in pandas 2.0
if debug: print(f"Batch of {len(j)} requests no. {i+1}/{len([i for i in Chunks(universeL, batchOf)])} ended")
if AtOptionTradeOnly:
IPADf.index = ATdf.index
IPADf.columns.name = ATdf.columns.name
else:
IPADf.index = df.index
IPADf.columns.name = df.columns.name
IPADf.rename(columns={"Volatility": 'ImpliedVolatility'}, inplace=True)
# We are going to want to show details about data retrieved in a dataframe in the output of this function. The line below allows us to maximise the width (column) length of cells to see all that is written within them.
pd.options.display.max_colwidth = maxColwidth
if graphStyle == 'simple':
display(searchDf.iloc[(searchDf.StrikePrice-underlyingPrice).abs().argsort()[:1]])
fig, axes = plt.subplots(ncols=1, figsize=simpleGraphSize)
axes.plot(
pd.DataFrame( # Matplotlib, the library used here for simple graphs, requires our dataframe in a specific format, which necessitates the use of `pd.DataFrame`
data=IPADf[['ImpliedVolatility']].ImpliedVolatility.values,
index=IPADf[['ImpliedVolatility']].ImpliedVolatility.index),
simpleGraphLineStyle)
if AtOptionTradeOnly: axes.set_title(f"{instrument} Implied Volatility At Trade Only")
else: axes.set_title(f"{instrument} Implied Volatility")
plt.show()
else:
display(searchDf.iloc[(searchDf['StrikePrice']-underlyingPrice).abs().argsort()[:1]])
IPADfGraph = IPADf[
['ImpliedVolatility', 'MarketValueInDealCcy',
'RiskFreeRatePercent', 'UnderlyingPrice', 'DeltaPercent',
'GammaPercent', 'RhoPercent', 'ThetaPercent', 'VegaPercent']]
if debug: display(IPADfGraph)
try: # This is needed in case there is not enough data to calculate values for all timestamps, see https://stackoverflow.com/questions/67244912/wide-format-csv-with-plotly-express
fig = px.line(IPADfGraph)
except:
if returnDfGraph:
return IPADfGraph
else:
IPADfGraph = IPADfGraph[
["ImpliedVolatility", "MarketValueInDealCcy",
"RiskFreeRatePercent", "UnderlyingPrice"]]
fig = px.line(IPADfGraph)
if graphStyle == 'overlay':
fig.update_layout(
title=instrument,
template=graphTemplate)
fig.for_each_trace(
lambda t: t.update(
visible=True if t.name in IPADfGraph.columns[:1] else "legendonly"))
fig.show()
elif graphStyle == '3 graphs':
fig = plotly.subplots.make_subplots(rows=3, cols=1)
fig.add_trace(go.Scatter(
x=IPADf.index, y=IPADfGraph.ImpliedVolatility,
name='Op Imp Volatility'), row=1, col=1)
fig.add_trace(go.Scatter(
x=IPADf.index, y=IPADfGraph.MarketValueInDealCcy,
name='Op Mk Pr'), row=2, col=1)
fig.add_trace(go.Scatter(
x=IPADf.index, y=IPADfGraph.UnderlyingPrice,
name=f"{underlying} Undrlyg Pr"), row=3, col=1)
fig.update(layout_xaxis_rangeslider_visible=False)
fig.update_layout(title=IPADfGraph.columns.name)
fig.update_layout(
title=instrument,
template=graphTemplate,
autosize=False,
width=1300,
height=500)
fig.show()
else:
print("Looks like the argument `graphStyle` used is incorrect. Try `simple`, `overlay` or `3 graphs`")
ImpVolatilityCalcIPA( # This will pick up 10 min data
x=15,
indexUnderlying=".SPX", # ".SPX" or ".STOXX50E"
callOrPut='Call', # 'Put' or 'Call'
dateBack=7,
expiryYearOfInterest=datetime.now().year,
riskFreeRate=None,
riskFreeRateField=None, # 'TR.FIXINGVALUE'
timeZoneInGraph=datetime.now().astimezone(),
maxColwidth=200,
graphStyle='overlay', # 'overlay', '3 graphs', 'simple'
simpleGraphLineStyle='.-', # 'o-'
simpleGraphSize=(15, 5),
graphTemplate='plotly_dark',
debug=False,
returnDfGraph=True,
AtOptionTradeOnly=True)
| DocumentTitle | RIC | StrikePrice | ExchangeCode | ExpiryDate | UnderlyingQuoteRIC | |
|---|---|---|---|---|---|---|
| 54 | OPRA S&P 500 Index Option 3985 Call Apr 2023 , Stock Index Cash Option, Call 3985 USD 21-Apr-2023, OPRA | SPXWd212339850.U | 3985 | OPQ | 2023-04-21 | [.SPX] |
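Inside `ImpVolatilityCalcIPA`, the raw trade history is cleaned up before being sent to IPA: trades are resampled onto a 10-minute grid, restricted to trading hours, and forward-filled where no trade printed. The snippet below is a minimal, self-contained sketch of that clean-up; the timestamps and prices are invented, so no Refinitiv session is needed:

```python
import pandas as pd

# Three irregular trade prints on an invented morning
ticks = pd.DataFrame(
    {"TRDPRC_1": [100.0, 101.0, 103.0]},
    index=pd.to_datetime(["2023-04-03 07:31", "2023-04-03 07:55",
                          "2023-04-03 08:24"]))
grid = ticks.resample("10min").mean()        # one bucket every 10 minutes
grid = grid.loc[(grid.index.strftime("%H:%M:%S") >= "07:30:00")
                & (grid.index.strftime("%H:%M:%S") <= "22:00:00")]  # trading hours only
grid = grid.ffill()                          # forward-fill empty buckets
print(grid)
```

The 07:40, 08:00 and 08:10 buckets had no trades, so they inherit the last traded price, exactly as the function above does before pricing each timestamp.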
# while True:
# # Code executed here
# clear_output(wait=True)
# ImpVolatilityCalcIPA(
# dateBack=3, indexUnderlying=".STOXX50E", callOrPut='Call',
# graphStyle='simple', AtOptionTradeOnly=True)
# time.sleep(5)
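The IPA endpoint used above accepts a limited number of instrument definitions per request, which is why `ImpVolatilityCalcIPA` slices its universe into batches of 100 with the `Chunks` generator. Here is a standalone sketch of that batching pattern, with plain integers standing in for the `option.Definition` objects:

```python
# Minimal sketch of the batching used above: slice a long list into
# n-sized chunks and (in the article's function) send one IPA request per chunk.
def chunks(lst, n):
    """Yield successive n-sized chunks from lst."""
    for i in range(0, len(lst), n):
        yield lst[i:i + n]

universe = list(range(250))       # stand-in for 250 option definitions
batches = list(chunks(universe, 100))
print([len(b) for b in batches])  # → [100, 100, 50]
```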
The code in the cell below was written expertly by Haykaz Aramyan in the article 'Functions to find Option RICs traded on different exchanges'. I wanted to introduce it towards the end of this (current) article as it uses more advanced Python notions, such as Classes. We look into reconstructing expired option RICs, which follow a different nomenclature to live ones:

Below, we put ourselves in the shoes of an analyst backtesting a strategy involving past historical Implied Volatilities. E.g.: if the average 3-business-day historical Implied Volatility of an Option contract is too high, they would not consider it in their portfolio.
Let's focus on .STOXX50E.
We are applying similar logic to what was seen above. As a result, we'll use the same object names and simply suffix them with a 2, from indexUnderlying2 onwards:
timeOfCalc2 = "2022-04-01"
indexUnderlying2 = ".STOXX50E"
timeOfCalcDatetime2 = datetime.strptime(timeOfCalc2, '%Y-%m-%d')
currentUnderlyingPrc2 = rd.get_history(
universe=[indexUnderlying2],
start=timeOfCalc2, # , end: "OptDateTime"=None
fields=["TRDPRC_1"],
interval="tick").iloc[-1][0]
currentUnderlyingPrc2
4282.63
if indexUnderlying2 == ".STOXX50E":
exchangeC2, exchangeRIC2, mcalGetCalendar2 = "EUX", "STX", "EUREX"
elif indexUnderlying2 == ".SPX":
exchangeC2, exchangeRIC2, mcalGetCalendar2 = "OPQ", "SPX", "CBOE_Futures"
exchangeC2, exchangeRIC2, mcalGetCalendar2
('EUX', 'STX', 'EUREX')
fullDatesAtTimeOfCalc2 = Get_exp_dates(
year=2022,
days=False,
mcal_get_calendar=mcalGetCalendar2)
fullDatesAtTimeOfCalc2
{2022: [datetime.date(2022, 1, 21),
datetime.date(2022, 2, 18),
datetime.date(2022, 3, 18),
datetime.date(2022, 4, 14),
datetime.date(2022, 5, 20),
datetime.date(2022, 6, 17),
datetime.date(2022, 7, 15),
datetime.date(2022, 8, 19),
datetime.date(2022, 9, 16),
datetime.date(2022, 10, 21),
datetime.date(2022, 11, 18),
datetime.date(2022, 12, 16)]}
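`Get_exp_dates` builds on a handy property of Python's `calendar` module: with weeks starting on Saturday, the last day of the third week returned by `monthdatescalendar` is always the month's third Friday. Below is a self-contained sketch without the holiday adjustment; note that the raw third Friday of April 2022 is the 15th (Good Friday, an exchange holiday), which `Get_exp_dates` shifts back to the 14th, as in the output above:

```python
import calendar

# With weeks starting on Saturday, week[0] always contains the month's
# first in-month Friday, so week[2]'s last day is the third Friday.
c = calendar.Calendar(firstweekday=calendar.SATURDAY)
third_fridays = [c.monthdatescalendar(2022, m)[2][-1] for m in range(1, 13)]
print(third_fridays[0])  # → 2022-01-21
print(third_fridays[3])  # → 2022-04-15 (pre-holiday shift; Get_exp_dates returns the 14th)
```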
fullDatesAtTimeOfCalcDatetime2 = [
datetime(i.year, i.month, i.day)
for i in fullDatesAtTimeOfCalc2[list(fullDatesAtTimeOfCalc2.keys())[0]]]
fullDatesAtTimeOfCalcDatetime2
[datetime.datetime(2022, 1, 21, 0, 0), datetime.datetime(2022, 2, 18, 0, 0), datetime.datetime(2022, 3, 18, 0, 0), datetime.datetime(2022, 4, 14, 0, 0), datetime.datetime(2022, 5, 20, 0, 0), datetime.datetime(2022, 6, 17, 0, 0), datetime.datetime(2022, 7, 15, 0, 0), datetime.datetime(2022, 8, 19, 0, 0), datetime.datetime(2022, 9, 16, 0, 0), datetime.datetime(2022, 10, 21, 0, 0), datetime.datetime(2022, 11, 18, 0, 0), datetime.datetime(2022, 12, 16, 0, 0)]
expiryDateOfInt2 = [i for i in fullDatesAtTimeOfCalcDatetime2
if i > timeOfCalcDatetime2 + relativedelta(days=x)][0]
expiryDateOfInt2
datetime.datetime(2022, 5, 20, 0, 0)
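The expiry-of-interest selection above reduces to "the first monthly expiry strictly more than x days after the calculation date". A condensed, self-contained version (using `timedelta` rather than `relativedelta`, which is equivalent for whole days), with dates mirroring the 2022 EUREX expiries printed earlier:

```python
from datetime import datetime, timedelta

# First six 2022 EUREX monthly expiries, as printed above
expiries = [datetime(2022, m, d) for m, d in
            [(1, 21), (2, 18), (3, 18), (4, 14), (5, 20), (6, 17)]]
calc_date, x = datetime(2022, 4, 1), 15
# Pick the first expiry strictly more than x days away
chosen = [e for e in expiries if e > calc_date + timedelta(days=x)][0]
print(chosen)  # → 2022-05-20: the 14-Apr expiry is only 13 days out, so May is picked
```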
def Get_exp_month(exp_date, opt_type):
# define option expiration identifiers
ident = {
'1': {'exp': 'A', 'C': 'A', 'P': 'M'},
'2': {'exp': 'B', 'C': 'B', 'P': 'N'},
'3': {'exp': 'C', 'C': 'C', 'P': 'O'},
'4': {'exp': 'D', 'C': 'D', 'P': 'P'},
'5': {'exp': 'E', 'C': 'E', 'P': 'Q'},
'6': {'exp': 'F', 'C': 'F', 'P': 'R'},
'7': {'exp': 'G', 'C': 'G', 'P': 'S'},
'8': {'exp': 'H', 'C': 'H', 'P': 'T'},
'9': {'exp': 'I', 'C': 'I', 'P': 'U'},
'10': {'exp': 'J', 'C': 'J', 'P': 'V'},
'11': {'exp': 'K', 'C': 'K', 'P': 'W'},
'12': {'exp': 'L', 'C': 'L', 'P': 'X'}}
# get expiration month code for a month
if opt_type.upper() == 'C':
exp_month = ident[str(exp_date.month)]['C']
elif opt_type.upper() == 'P':
exp_month = ident[str(exp_date.month)]['P']
return ident, exp_month
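To make the mapping concrete: for a May expiry (month key '5'), a Call takes the code 'E' and a Put the code 'Q', which is why the EUREX Put RIC constructed below contains a 'Q'. A tiny self-contained check, replicating just one row of the table above (the helper name `exp_month_code` is my own):

```python
# One row of the identifier table from Get_exp_month above
ident = {'5': {'exp': 'E', 'C': 'E', 'P': 'Q'}}

def exp_month_code(month, opt_type):
    # look up the month's Call/Put code, mirroring Get_exp_month
    return ident[str(month)][opt_type.upper()]

print(exp_month_code(5, 'p'))  # → Q
print(exp_month_code(5, 'c'))  # → E
```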
def Check_ric(ric, maturity, ident):
exp_date = pd.Timestamp(maturity)
# get start and end date for get_historical_price_summaries
# query (take current date minus 90 days period)
sdate = (datetime.now() - timedelta(90)).strftime('%Y-%m-%d')
edate = datetime.now().strftime('%Y-%m-%d')
# check if option is matured. If yes, add expiration syntax and recalculate
# start and end date of the query (take expiration day minus 90 days period)
if pd.Timestamp(maturity) < datetime.now():
ric = ric + '^' + ident[str(exp_date.month)]['exp'] + str(exp_date.year)[-2:]
sdate = (exp_date - timedelta(90)).strftime('%Y-%m-%d')
edate = exp_date.strftime('%Y-%m-%d')
# request option prices. Please note, there is no settle price for OPRA traded options
fieldsRequest = ['BID', 'ASK', 'TRDPRC_1']
if ric.split('.')[1][0] != 'U':
fieldsRequest.append('SETTLE')
prices = rd.content.historical_pricing.summaries.Definition(
ric, start=sdate, end=edate,
interval=rd.content.historical_pricing.Intervals.DAILY,
fields=fieldsRequest).get_data()
return ric, prices
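The expired-RIC decoration inside `Check_ric` is worth isolating: once an option has matured, '^' plus the month's expiry code plus the two-digit year is appended to the live RIC. A self-contained sketch, mirroring the May 2022 EUREX Put retrieved later in the article:

```python
import pandas as pd

# Only the expiry-code slice of the identifier table is needed here
ident = {'5': {'exp': 'E'}}
ric, maturity = 'STXE43000Q2.EX', pd.Timestamp('2022-05-20')
if maturity < pd.Timestamp.now():
    # matured: append '^' + month expiry code + 2-digit year
    ric = ric + '^' + ident[str(maturity.month)]['exp'] + str(maturity.year)[-2:]
print(ric)  # → STXE43000Q2.EX^E22
```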
def Get_ric_eurex(asset, maturity, strike, opt_type):
exp_date = pd.Timestamp(maturity)
if asset[0] == '.':
asset_name = asset[1:]
if asset_name == 'FTSE':
asset_name = 'OTUK'
elif asset_name == 'SSMI':
asset_name = 'OSMI'
elif asset_name == 'GDAXI':
asset_name = 'GDAX'
elif asset_name == 'ATX':
asset_name = 'FATXA'
elif asset_name == 'STOXX50E':
asset_name = 'STXE'
else:
asset_name = asset.split('.')[0]
ident, exp_month = Get_exp_month(
exp_date=exp_date, opt_type=opt_type)
if type(strike) == float:
int_part = int(strike)
dec_part = str(str(strike).split('.')[1])[0]
else:
int_part = int(strike)
dec_part = '0'
if len(str(int(strike))) == 1:
strike_ric = '0' + str(int_part) + dec_part
else:
strike_ric = str(int_part) + dec_part
possible_rics = []
generations = ['', 'a', 'b', 'c', 'd']
for gen in generations:
ric = asset_name + strike_ric + gen + exp_month + str(exp_date.year)[-1:] + '.EX'
ric, prices = Check_ric(ric, maturity, ident)
if prices is not None:
return ric, prices
else:
possible_rics.append(ric)
print(f'Here is a list of possible RICs {possible_rics}, however we could not find any prices for those!')
return ric, prices
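The strike encoding in `Get_ric_eurex` can also be sketched on its own: integer strikes get a trailing '0' standing for the (zero) first decimal, float strikes keep their first decimal digit, and single-digit strikes are zero-padded. The helper name `eurex_strike_ric` below is my own:

```python
# Self-contained sketch of the EUREX strike encoding in Get_ric_eurex:
# strike 4300 becomes '43000', which is why the RIC above starts 'STXE43000'.
def eurex_strike_ric(strike):
    if isinstance(strike, float):
        int_part, dec_part = int(strike), str(strike).split('.')[1][0]
    else:
        int_part, dec_part = int(strike), '0'
    prefix = '0' if len(str(int(strike))) == 1 else ''  # pad 1-digit strikes
    return prefix + str(int_part) + dec_part

print(eurex_strike_ric(4300))  # → 43000
print(eurex_strike_ric(7.5))   # → 075
```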
int(round(currentUnderlyingPrc2, -2))
4300
instrument2, instrument2Prices = Get_ric_eurex(
asset='.STOXX50E', opt_type='P',
maturity=expiryDateOfInt2.strftime('%Y-%m-%d'),
strike=int(round(currentUnderlyingPrc2, -2)))
instrument2
'STXE43000Q2.EX^E22'
instrument2Prices.data.df
| STXE43000Q2.EX^E22 | BID | ASK | TRDPRC_1 | SETTLE |
|---|---|---|---|---|
| Date | ||||
| 2022-02-21 | 422.2 | 434.3 | NaN | 426.9 |
| 2022-02-22 | 420.1 | 433.1 | NaN | 424.7 |
| 2022-02-23 | 426.6 | 443.0 | NaN | 435.0 |
| 2022-02-24 | 552.2 | 573.1 | NaN | 563.0 |
| 2022-02-25 | 428.4 | 444.1 | NaN | 435.0 |
| ... | ... | ... | ... | ... |
| 2022-05-16 | 611.5 | 643.0 | 628.8 | 627.0 |
| 2022-05-17 | 548.7 | 580.3 | <NA> | 565.0 |
| 2022-05-18 | 604.7 | 634.2 | <NA> | 618.8 |
| 2022-05-19 | 651.8 | 685.1 | <NA> | 669.7 |
| 2022-05-20 | 577.6 | 640.8 | <NA> | 600.2 |
63 rows × 4 columns
# response = search.Definition(
# view=search.Views.SEARCH_ALL, # To see what views are available: `help(search.Views)` & `search.metadata.Definition(view = search.Views.SEARCH_ALL).get_data().data.df.to_excel("SEARCH_ALL.xlsx")`
# query=indexUnderlying,
# select="DocumentTitle, RIC, StrikePrice, ExchangeCode, ExpiryDate, UnderlyingAsset, \
# UnderlyingAssetName, UnderlyingAssetRIC, ESMAUnderlyingIndexCode, RCSUnderlyingMarket, \
# UnderlyingQuoteName, UnderlyingQuoteRIC",
# filter=f"RCSAssetCategoryLeaf eq 'Option' and RIC eq '{exchangeRIC}*' and DocumentTitle ne '*Weekly*' " +
# f"and CallPutOption eq '{callOrPut}' and ExchangeCode eq '{exchangeC}' and " +
# f"ExpiryDate ge {(expiryDateOfInt - relativedelta(days=1)).strftime('%Y-%m-%d')} " +
# f"and ExpiryDate lt {(expiryDateOfInt + relativedelta(days=1)).strftime('%Y-%m-%d')}", # ge (greater than or equal to), gt (greater than), lt (less than) and le (less than or equal to). These can only be applied to numeric and date properties.
# top=10000
# ).get_data()
# searchDf = response.data.df
# def Get_exchange_code(asset):
# # build search query to find exchange codes where the option on the given underlying is traded
# response = search.Definition(
# # view=search.Views.SEARCH_ALL, # To see what views are available: `help(search.Views)` & `search.metadata.Definition(view = search.Views.SEARCH_ALL).get_data().data.df.to_excel("SEARCH_ALL.xlsx")`
# query=asset,
# filter="SearchAllCategory eq 'Options' and Periodicity eq 'Monthly' ",
# select=' RIC, DocumentTitle, UnderlyingQuoteRIC,Periodicity, ExchangeCode',
# navigators="ExchangeCode",
# top=10000
# ).get_data()
# result = response.data.raw["Navigators"]["ExchangeCode"]
# # store a=exchange codes in a list
# exchange_codes = []
# for i in range(len(result['Buckets'])):
# code = result['Buckets'][i]['Label']
# exchange_codes.append(code)
# return exchange_codes
def Get_exchanges(asset, topNuSearchResults=10_000):
response = search.Definition(
query=asset,
filter="SearchAllCategory eq 'Options' and Periodicity eq 'Monthly' ",
select=' RIC, DocumentTitle, UnderlyingQuoteRIC,Periodicity, ExchangeCode',
navigators="ExchangeCode",
top=topNuSearchResults).get_data()
result = response.data.raw["Navigators"]["ExchangeCode"]
exchange_codes = []
for i in range(len(result['Buckets'])):
code = result['Buckets'][i]['Label']
exchange_codes.append(code)
return exchange_codes
from typing import Tuple, Union, Dict, List, Any
class Option_RIC:
"""
Option_RIC
"""
def __init__(
self,
maturity: str, # '2022-01-21'
strike: int,
opt_type: str, # 'C' or 'P'
asset: str = ".STOXX50E",
debug: bool = False,
topNuSearchResults: int = 10_000):
# Most objects are simple to define at this stage, but soon you'll see that two of them are a little more finicky
self.maturity = pd.Timestamp(maturity)
self.strike = strike
self.opt_type = opt_type
self.debug = debug
self.asset = asset
response = search.Definition(
query=asset,
filter="SearchAllCategory eq 'Options' and Periodicity eq 'Monthly' ",
select=' RIC, DocumentTitle, UnderlyingQuoteRIC,Periodicity, ExchangeCode',
navigators="ExchangeCode",
top=topNuSearchResults).get_data()
result = response.data.raw["Navigators"]["ExchangeCode"]
exchange_codes = []
for i in range(len(result['Buckets'])):
code = result['Buckets'][i]['Label']
exchange_codes.append(code)
self.exchange = exchange_codes
def Check_ric(self, ric, maturity):
"""
Support Function used within other functions in `Option_RIC` Class.
"""
exp_date = pd.Timestamp(maturity)
if pd.Timestamp(maturity) < datetime.now():
sdate = (exp_date - timedelta(90)).strftime('%Y-%m-%d')
edate = exp_date.strftime('%Y-%m-%d')
else:
sdate = (datetime.now() - timedelta(90)).strftime('%Y-%m-%d')
edate = datetime.now().strftime('%Y-%m-%d')
if ric.split('.')[1][0] == 'U':
prices = rd.content.historical_pricing.summaries.Definition(
ric, start=sdate, end=edate,
interval=rd.content.historical_pricing.Intervals.DAILY,
fields=['BID', 'ASK', 'TRDPRC_1']).get_data()
else:
prices = rd.content.historical_pricing.summaries.Definition(
ric, start=sdate, end=edate,
interval=rd.content.historical_pricing.Intervals.DAILY,
fields=['BID', 'ASK', 'TRDPRC_1', 'SETTLE']).get_data()
if self.debug:
print(f"Check_ric's (sdate, edate) = ({sdate}, {edate})")
return ric, prices
def Get_asset_and_exchange(self):
"""
Support Function used within other functions in `Option_RIC` Class.
"""
asset_in_ric: Dict[str, Dict[str, str]] = {
'SSMI': {'EUX': 'OSMI'},
'GDAXI': {'EUX': 'GDAX'},
'ATX': {'EUX': 'FATXA'},
'STOXX50E': {'EUX': 'STXE'},
'FTSE': {'IEU': 'LFE', 'EUX': 'OTUK'},
'N225': {'OSA': 'JNI'},
'TOPX': {'OSA': 'JTI'}}
asset_exchange: Dict[str, str] = {}
if self.asset[0] != '.':
asset: str = self.asset.split('.')[0]
else:
asset: str = self.asset[1:]
for exch in self.exchange:
if asset in asset_in_ric:
asset_exchange[exch] = asset_in_ric[asset][exch]
else:
asset_exchange[exch] = asset
return asset_exchange
def Get_strike(self, exch):
if exch == 'OPQ':
if type(self.strike) == float:
int_part = int(self.strike)
dec_part = str(str(self.strike).split('.')[1])
else:
int_part = int(self.strike)
dec_part = '00'
if int(self.strike) < 10:
strike_ric = '00' + str(int_part) + dec_part
elif int_part >= 10 and int_part < 100:
strike_ric = '0' + str(int_part) + dec_part
elif int_part >= 100 and int_part < 1000:
strike_ric = str(int_part) + dec_part
elif int_part >= 1000 and int_part < 10000:
strike_ric = str(int_part) + '0'
elif int_part >= 10000 and int_part < 20000:
strike_ric = 'A' + str(int_part)[-4:]
elif int_part >= 20000 and int_part < 30000:
strike_ric = 'B' + str(int_part)[-4:]
elif int_part >= 30000 and int_part < 40000:
strike_ric = 'C' + str(int_part)[-4:]
elif int_part >= 40000 and int_part < 50000:
strike_ric = 'D' + str(int_part)[-4:]
elif exch == 'HKG' or exch == 'HFE':
if self.asset[0] == '.':
strike_ric = str(int(self.strike))
else:
strike_ric = str(int(self.strike * 100))
elif exch == 'OSA':
strike_ric = str(self.strike)[:3]
elif exch == 'EUX' or exch == 'IEU':
if type(self.strike) == float and len(str(int(self.strike))) == 1:
int_part = int(self.strike)
dec_part = str(str(self.strike).split('.')[1])[0]
strike_ric = '0' + str(int_part) + dec_part
elif (len(str(int(self.strike))) > 1 and exch == 'EUX'):
strike_ric = str(int(self.strike)) + '0'
elif (len(str(int(self.strike))) == 2 and exch == 'IEU'):
strike_ric = '0' + str(int(self.strike))
elif len(str(int(self.strike))) > 2 and exch == 'IEU':
strike_ric = str(int(self.strike))
return strike_ric
def Get_exp_month(self, exchange):
"""
Support Function used within other functions in `Option_RIC` Class.
"""
ident_opra = {
'1': {'exp': 'A', 'C_bigStrike': 'a', 'C_smallStrike': 'A',
'P_bigStrike': 'm', 'P_smallStrike': 'M'},
'2': {'exp': 'B', 'C_bigStrike': 'b', 'C_smallStrike': 'B',
'P_bigStrike': 'n', 'P_smallStrike': 'N'},
'3': {'exp': 'C', 'C_bigStrike': 'c', 'C_smallStrike': 'C',
'P_bigStrike': 'o', 'P_smallStrike': 'O'},
'4': {'exp': 'D', 'C_bigStrike': 'd', 'C_smallStrike': 'D',
'P_bigStrike': 'p', 'P_smallStrike': 'P'},
'5': {'exp': 'E', 'C_bigStrike': 'e', 'C_smallStrike': 'E',
'P_bigStrike': 'q', 'P_smallStrike': 'Q'},
'6': {'exp': 'F', 'C_bigStrike': 'f', 'C_smallStrike': 'F',
'P_bigStrike': 'r', 'P_smallStrike': 'R'},
'7': {'exp': 'G', 'C_bigStrike': 'g', 'C_smallStrike': 'G',
'P_bigStrike': 's', 'P_smallStrike': 'S'},
'8': {'exp': 'H', 'C_bigStrike': 'h', 'C_smallStrike': 'H',
'P_bigStrike': 't', 'P_smallStrike': 'T'},
'9': {'exp': 'I', 'C_bigStrike': 'i', 'C_smallStrike': 'I',
'P_bigStrike': 'u', 'P_smallStrike': 'U'},
'10': {'exp': 'J', 'C_bigStrike': 'j', 'C_smallStrike': 'J',
'P_bigStrike': 'v', 'P_smallStrike': 'V'},
'11': {'exp': 'K', 'C_bigStrike': 'k', 'C_smallStrike': 'K',
'P_bigStrike': 'w', 'P_smallStrike': 'W'},
'12': {'exp': 'L', 'C_bigStrike': 'l', 'C_smallStrike': 'L',
'P_bigStrike': 'x', 'P_smallStrike': 'X'}}
ident_all = {
'1': {'exp': 'A', 'C': 'A', 'P': 'M'},
'2': {'exp': 'B', 'C': 'B', 'P': 'N'},
'3': {'exp': 'C', 'C': 'C', 'P': 'O'},
'4': {'exp': 'D', 'C': 'D', 'P': 'P'},
'5': {'exp': 'E', 'C': 'E', 'P': 'Q'},
'6': {'exp': 'F', 'C': 'F', 'P': 'R'},
'7': {'exp': 'G', 'C': 'G', 'P': 'S'},
'8': {'exp': 'H', 'C': 'H', 'P': 'T'},
'9': {'exp': 'I', 'C': 'I', 'P': 'U'},
'10': {'exp': 'J', 'C': 'J', 'P': 'V'},
'11': {'exp': 'K', 'C': 'K', 'P': 'W'},
'12': {'exp': 'L', 'C': 'L', 'P': 'X'}}
if exchange == 'OPQ':
if self.strike > 999.999:
exp_month_code = ident_opra[str(
self.maturity.month)][self.opt_type + '_bigStrike']
else:
exp_month_code = ident_opra[str(
self.maturity.month)][self.opt_type + '_smallStrike']
else:
exp_month_code = ident_all[str(self.maturity.month)][self.opt_type]
if self.maturity < datetime.now():
expired = '^' + \
ident_all[str(self.maturity.month)]['exp'] + \
str(self.maturity.year)[-2:]
else:
expired = ''
return exp_month_code, expired
def RIC_prices(self, ric, ricPrices):
if self.debug:
print(f"ricPrices's ric: {ric}")
print(f"self.maturity: {self.maturity}")
ric, prices = self.Check_ric(ric, self.maturity)
if prices is not None:
valid_ric = {ric: prices}
ricPrices['valid_ric'].append(valid_ric)
else:
ricPrices['potential_rics'].append(ric)
return ricPrices
def Construct_RIC(self):
asset_exchange = self.Get_asset_and_exchange()
supported_exchanges = ['OPQ', 'IEU', 'EUX', 'HKG', 'HFE', 'OSA']
ricPrices = {'valid_ric': [], 'potential_rics': []}
for exchange, asset in asset_exchange.items():
if exchange in supported_exchanges:
strike_ric = self.Get_strike(exchange)
exp_month_code, expired = self.Get_exp_month(exchange)
if exchange == 'OPQ':
ric = asset + exp_month_code + \
str(self.maturity.day) + \
str(self.maturity.year)[-2:] + \
strike_ric + '.U' + expired
ricPrices = self.RIC_prices(ric, ricPrices)
elif exchange == 'HKG' or exchange == 'HFE':
gen_len = ['0', '1', '2', '3']
if exchange == 'HFE':
gen_len = ['']
for i in gen_len:
exchs = {'HKG': {'exch_code': '.HK', 'gen': str(i)},
'HFE': { 'exch_code': '.HF', 'gen': ''}}
ric = asset + strike_ric + exchs[exchange]['gen'] + exp_month_code + str(
self.maturity.year)[-1:] + exchs[exchange]['exch_code'] + expired
ricPrices = self.RIC_prices(ric, ricPrices)
elif exchange == 'OSA':
for jnet in ['', 'L', 'R']:
if self.asset[0] == '.':
ric = asset + jnet + strike_ric + exp_month_code + \
str(self.maturity.year)[-1:] + '.OS' + expired
ricPrices = self.RIC_prices(ric, ricPrices)
else:
for gen in ['Y', 'Z', 'A', 'B', 'C']:
ric = asset + jnet + gen + strike_ric + exp_month_code + \
str(self.maturity.year)[-1:] + \
'.OS' + expired
ricPrices = self.RIC_prices(ric, ricPrices)
elif exchange == 'EUX' or exchange == 'IEU':
exchs = {'EUX': '.EX', 'IEU': '.L'}
for gen in ['', 'a', 'b', 'c', 'd']:
ric = asset + strike_ric + gen + exp_month_code + \
str(self.maturity.year)[-1:] + \
exchs[exchange] + expired
if self.debug: print(f"Construct_RIC's ric: {ric}")
try:
ricPrices = self.RIC_prices(ric, ricPrices)
except Exception:
if self.debug:
print("Error for self.RIC_prices(ric, ricPrices)")
else:
print(f'The {exchange} exchange is not supported yet')
return ricPrices
timeOfCalc2 = '2021-01-21'
indexUnderlying2 = ".STOXX50E"
timeOfCalcDatetime2 = datetime.strptime(timeOfCalc2, '%Y-%m-%d')
currentUnderlyingPrc2 = rd.get_history(
universe=[indexUnderlying2],
start=timeOfCalc2, # , end: "OptDateTime"=None
fields=["TRDPRC_1"],
interval="tick").iloc[-1][0]
currentUnderlyingPrc2
4282.63
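`round(x, -2)` rounds to the nearest hundred, which is how the strike passed to the `Option_RIC` constructor just below is chosen. As a self-contained sketch (the chain of strike values here is made up), the same idea extends to picking the closest listed strike with `pandas`:

```python
import pandas as pd

# Hypothetical option chain: the strike values here are made up.
chain = pd.DataFrame({"StrikePrice": [4200, 4250, 4300, 4350, 4400]})
underlying = 4282.63

# Nearest round-hundred strike, as used for the Option_RIC constructor below:
atm_guess = int(round(underlying, -2))

# Closest listed strike via argsort on the absolute distance to the
# underlying; the same pattern is used later in Search_index_opt_ATM:
closest = chain.iloc[(chain.StrikePrice - underlying).abs().argsort()[:1]]
print(atm_guess, closest.StrikePrice.values[0])  # both 4300 here
```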
STOXX50ETest1 = Option_RIC(
maturity=timeOfCalc2,
strike=int(round(currentUnderlyingPrc2, -2)),
opt_type='P',
asset=indexUnderlying2,
debug=False)
STOXX50ETest2 = STOXX50ETest1.Construct_RIC()
list(STOXX50ETest2['valid_ric'][0].keys())[0]
'STXE43000M1.EX^A21'
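This RIC can be read off against the construction rules above. The decoder below is a hypothetical illustration only (the function `decode_eurex_ric` and its regex are mine, not part of the Refinitiv Data Library), assuming the month-code conventions used elsewhere in this article: A-L for expiry months and M-X for put months:

```python
import re

PUT_CODES = 'MNOPQRSTUVWX'  # January..December, puts
EXP_CODES = 'ABCDEFGHIJKL'  # January..December, expiry ('^') marker

def decode_eurex_ric(ric: str) -> dict:
    # root + strike field + month code + year digit + '.EX' + optional '^' marker
    m = re.match(r'^([A-Z]+?)(\d+)([A-Xa-x])(\d)\.EX(?:\^([A-L])(\d{2}))?$', ric)
    if m is None:
        raise ValueError(f'Unrecognised RIC: {ric}')
    root, strike_field, month_code, year_digit, exp_code, exp_year = m.groups()
    out = {'root': root,
           'strike_field': strike_field,
           'put': month_code.upper() in PUT_CODES,
           'month': (PUT_CODES.index(month_code.upper()) + 1
                     if month_code.upper() in PUT_CODES
                     else EXP_CODES.index(month_code.upper()) + 1),
           'year_digit': year_digit}
    if exp_code:
        out['expired'] = f'20{exp_year}-{EXP_CODES.index(exp_code) + 1:02d}'
    return out

decode_eurex_ric('STXE43000M1.EX^A21')
# root 'STXE', strike field '43000', put, month 1 (January), expired '2021-01'
```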
STOXX50ETest2['valid_ric'][0][list(STOXX50ETest2['valid_ric'][0].keys())[0]].data.df.head()
| STXE43000M1.EX^A21 | BID | ASK | TRDPRC_1 | SETTLE |
|---|---|---|---|---|
| Date | ||||
| 2020-10-23 | 1118.2 | 1123.3 | <NA> | 1122.7 |
| 2020-10-26 | 1193.9 | 1225.8 | <NA> | 1209.3 |
| 2020-10-27 | 1223.1 | 1255 | <NA> | 1238.3 |
| 2020-10-28 | 1344.4 | 1376.2 | <NA> | 1360.4 |
| 2020-10-29 | 1342.5 | 1373.8 | <NA> | 1357.2 |
timeOfCalc2 + 'T11:53:10.166926'
'2021-01-21T11:53:10.166926'
STOXX50ETest2OptnMrktPrice = rd.get_history(
universe=list(STOXX50ETest2['valid_ric'][0].keys()),
fields=["TRDPRC_1"],
interval="10min",
start=timeOfCalc2 + 'T11:53:09.168706',
end=timeOfCalc2 + 'T11:53:10.166926') # Ought to always end at 8 pm for OPRA exchanged Options, more info in the article below
STOXX50ETest2OptnMrktPrice
timeOfCalc3 = '2022-02-10'
indexUnderlying3 = ".SPX"
timeOfCalcDatetime3 = datetime.strptime(timeOfCalc3, '%Y-%m-%d')
currentUnderlyingPrc3 = rd.get_history(
universe=[indexUnderlying3],
start=timeOfCalc3, # , end: "OptDateTime"=None
fields=["TRDPRC_1"],
interval="tick").iloc[-1][0]
currentUnderlyingPrc3
3986.37
SPXTest2 = Option_RIC(
maturity='2022-01-21',
strike=int(round(currentUnderlyingPrc3, -2)),
opt_type='P',
asset=indexUnderlying3,
debug=False)
SPXTest2 = SPXTest2.Construct_RIC()
list(SPXTest2['valid_ric'][0].keys())[0]
'SPXm212240000.U^A22'
SPXTest2['valid_ric'][0][list(SPXTest2['valid_ric'][0].keys())[0]].data.df
| SPXm212240000.U^A22 | BID | ASK | TRDPRC_1 |
|---|---|---|---|
| Date | |||
| 2021-10-25 | 33.80 | 34.40 | 34.42 |
| 2021-10-26 | 34.70 | 35.30 | 34.95 |
| 2021-10-27 | 38.70 | 39.40 | 39.20 |
| 2021-10-28 | 34.00 | 34.60 | 33.54 |
| 2021-10-29 | 32.10 | 32.60 | 34.13 |
| ... | ... | ... | ... |
| 2022-01-13 | 0.55 | 0.65 | 0.57 |
| 2022-01-14 | 0.35 | 0.50 | 0.40 |
| 2022-01-18 | 0.35 | 0.45 | 0.40 |
| 2022-01-19 | 0.20 | 0.30 | 0.25 |
| 2022-01-20 | 0.05 | 0.15 | 0.15 |
61 rows × 3 columns
SPXTest2OptnMrktPrice = rd.get_history(
universe=list(SPXTest2['valid_ric'][0].keys())[0],
fields=["TRDPRC_1"],
interval="10min",
start=timeOfCalc2 + 'T11:53:09.168706',
end=timeOfCalc2 + 'T11:53:10.166926') # Ought to always end at 8 pm for OPRA exchanged Options, more info in the article below
SPXTest2OptnMrktPrice
timeOfCalc2+'T11:53:09.168706'
'2021-01-21T11:53:09.168706'
rd.content.historical_pricing.summaries.Definition(
universe ='STXE43000M1.EX^A21',
start='2021-01-23',
end='2021-01-29',
# fields=['BID', 'ASK', 'TRDPRC_1', 'SETTLE']
interval=rd.content.historical_pricing.Intervals.DAILY).get_data()
---------------------------------------------------------------------------
RDError                                   Traceback (most recent call last)
Cell In[125], line 1
----> 1 rd.content.historical_pricing.summaries.Definition(
      2     universe ='STXE43000M1.EX^A21',
      3     start='2021-01-23',
      4     end='2021-01-29',
      5     # fields=['BID', 'ASK', 'TRDPRC_1', 'SETTLE']
      6     interval=rd.content.historical_pricing.Intervals.DAILY).get_data()
...
RDError: Error code 1 | No data to return, please check errors: ERROR: No successful response. (None, None)
rd.content.historical_pricing.summaries.Definition(
universe ='STXE43000M1.EX^A21',
start='2021-01-23T11:53:09.168706',
end='2021-01-26T19:53:10.166926',
# fields=['BID', 'ASK', 'TRDPRC_1', 'SETTLE']
interval=rd.content.historical_pricing.Intervals.MINUTE).get_data()
---------------------------------------------------------------------------
RDError                                   Traceback (most recent call last)
Cell In[122], line 1
----> 1 rd.content.historical_pricing.summaries.Definition(
      2     universe ='STXE43000M1.EX^A21 ',
      3     start='2021-01-26T11:53:09.168706',
      4     end='2021-01-26T19:53:10.166926',
      5     # fields=['BID', 'ASK', 'TRDPRC_1', 'SETTLE']
      6     interval=rd.content.historical_pricing.Intervals.MINUTE).get_data()
...
RDError: Error code 1 | No data to return, please check errors: ERROR: No successful response. (None, None)
rd.content.historical_pricing.Intervals.MINUTE
help(rd.content.historical_pricing.summaries.Definition)
Help on class Definition in module refinitiv.data.content.historical_pricing._summaries_definition:
class Definition(refinitiv.data.delivery._data._data_provider.DataProviderLayer)
| Definition(*args, **kwds)
|
| Summary line of this class that defines parameters for requesting summaries from historical pricing
|
| Parameters
| ----------
| universe : str or list of str
| The entity universe
| interval : str or Intervals, optional
| The consolidation interval in ISO8601
| start : str or date or datetime or timedelta, optional
| The start date and timestamp of the query in ISO8601 with UTC only
| end : str or date or datetime or timedelta, optional
| The end date and timestamp of the query in ISO8601 with UTC only
| adjustments : list of Adjustments or Adjustments or str, optional
| The adjustment list or Adjustments type
| sessions : list of MarketSession or MarketSession or str, optional
| The list of market session classification or str
| count : int, optional
| The maximum number of data returned. Values range: 1 - 10000
| fields : list, optional
| The list of fields that are to be returned in the response
| closure : Any, optional
| Specifies the parameter that will be merged with the request
| extended_params : dict, optional
| If necessary other parameters
|
| Examples
| --------
| >>> from refinitiv.data.content.historical_pricing import summaries
| >>> definition_summaries = summaries.Definition("EUR")
| >>> response = definition_summaries.get_data()
|
| Method resolution order:
| Definition
| refinitiv.data.delivery._data._data_provider.DataProviderLayer
| typing.Generic
| builtins.object
|
| Methods defined here:
|
| __init__(self, universe: Union[str, List[str]], interval: Union[str, refinitiv.data.content._intervals.Intervals] = None, start: Union[str, datetime.date, datetime.datetime, datetime.timedelta, NoneType] = None, end: Union[str, datetime.date, datetime.datetime, datetime.timedelta, NoneType] = None, adjustments: Union[List[refinitiv.data.content.historical_pricing._hp_data_provider.Adjustments], refinitiv.data.content.historical_pricing._hp_data_provider.Adjustments, str] = None, sessions: Union[List[refinitiv.data.content.historical_pricing._hp_data_provider.MarketSession], refinitiv.data.content.historical_pricing._hp_data_provider.MarketSession, str] = None, count: Union[int, NoneType] = None, fields: Union[list, NoneType] = None, closure: Union[Any, NoneType] = None, extended_params: Union[dict, NoneType] = None) -> None
| Initialize self. See help(type(self)) for accurate signature.
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __orig_bases__ = (refinitiv.data.delivery._data._data_provider.Dat...f...
|
| __parameters__ = ()
|
| ----------------------------------------------------------------------
| Methods inherited from refinitiv.data.delivery._data._data_provider.DataProviderLayer:
|
| __repr__(self)
| Return repr(self).
|
| get_data(self, session=None, on_response=None, **kwargs) -> ~T
|
| async get_data_async(self, session=None, on_response=None, **kwargs) -> ~T
|
| ----------------------------------------------------------------------
| Data descriptors inherited from refinitiv.data.delivery._data._data_provider.DataProviderLayer:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| ----------------------------------------------------------------------
| Class methods inherited from typing.Generic:
|
| __class_getitem__(params) from builtins.type
|
| __init_subclass__(*args, **kwargs) from builtins.type
| This method is called when a class is subclassed.
|
| The default implementation does nothing. It may be
| overridden to extend subclasses.
|
| ----------------------------------------------------------------------
| Static methods inherited from typing.Generic:
|
| __new__(cls, *args, **kwds)
| Create and return a new object. See help(type) for accurate signature.
def get_potential_rics(year, trans_day, asset, OTM_size, diff, opt_type):
'''
This function returns the list of potential option RICs for a specified year reconstructed based on Refinitiv RIC and option trading rules.
Dependencies
----------------------------------------------
Python library 'Refinitiv Dataplatform' version 1.0.0a8.post1
Parameters
-----------------------------------------------
Input:
year (int): year for which transaction days are requested
trans_day (str, default = 'first'): takes either 'first' or 'third', indicating the first business day or the 3rd Friday of a month respectively
asset (str): RIC code of the asset
OTM_size (int): percentage number indicating how far away the strike price is from the price of the underlying asset
diff (int): tolerated difference in OTM used to construct upper and lower bounds of strike prices
opt_type (str): takes either "call" or "put"
Output:
potential_RICs (dict): dictionary containing potential RICs for each month with strike prices from the lower to upper bounds of strikes
strikes (list): list of the strike prices calculated based on OTM size for each month
'''
# open file to report log of the function output
report = open("Log report.txt", "a")
# call functions to get expiration and transaction days
trans_days = get_trans_days(year, trans_day)
trans_days_prev = get_trans_days(year - 1, trans_day)
dates = get_exp_dates(year)
# trim underlying asset's RIC to get the required part for option RIC
if asset[0] == '.': # check if the asset is an index or an equity
asset_name = asset[1:] # get the asset name - we remove "." symbol for index options
adj_factors = 12 * [1] # set adjustment factors to be equal to 1 for each month (no stock split corporate event is applicable to indices)
else:
asset_name = asset.split('.')[0] # we need only the first part of the RICs for equities
# get list of corporate events for equities
corp_event = rdp.get_data(universe = asset,
fields = ["TR.CAEffectiveDate", "TR.CAAdjustmentFactor", "TR.CAAdjustmentType"],
parameters = {"CAEventType": "SSP", "SDate": datetime.today().strftime("%Y-%m-%d"), "EDate": "-50Y"})
# run adjustment_factor function to get the factors
adj_factors = adjustment_factor(corp_event, year = year, trans_day = trans_day)
# define expiration month codes to be used after "^" sign
exp = ['A', 'B', 'C', 'D','E','F','G', 'H', 'I', 'J', 'K', 'L']
potential_RICs = {}
strikes = []
# construct potential RICs for each month of a specified year
for j in range(1,13):
# get day of expiration for a month
day = dates[year][j - 1]
# get date of price request, which is in the previous month of expiration
if j != 1:
date = str(year) + '-' + str(j - 1) + '-' + str(trans_days[j - 2])
else: # for January, we need to subtract a year along with the month
date = str(year - 1) + '-' + str(j + 11) + '-' + str(trans_days_prev[j + 10])
# get price of underlying asset as of the transaction date
# get the corresponding adjustment factor for the month
adj_factor = adj_factors[j-1]
price = rdp.get_data(asset, fields = ['TR.PriceClose'], parameters = {'SDate': date})
price = float(price.iloc[0,1]) / adj_factor # adjust prices by the adjustment factor; if there were no stock split events, adj_factor = 1
# calculate the strike price for call options
if opt_type.lower() == 'call':
strike = price + price * OTM_size / 100
# define expiration month codes for call options while also considering the strike price
if strike > 999.999:
exp_codes_call = ['a', 'b', 'c', 'd','e','f','g', 'h', 'i', 'j', 'k', 'l']
else:
exp_codes_call = ['A', 'B', 'C', 'D','E','F','G', 'H', 'I', 'J', 'K', 'L']
# get expiration month code for a month
exp_month = exp_codes_call[j-1]
# calculate the strike price and get expiration month code for a month for put options
elif opt_type.lower() == 'put':
strike = price - price * OTM_size/100
if strike > 999.999:
exp_codes_put = ['m', 'n', 'o', 'p','q','r','s', 't', 'u', 'v', 'w', 'x']
else:
exp_codes_put = ['M', 'N', 'O', 'P','Q','R','S', 'T', 'U', 'V', 'W', 'X']
exp_month = exp_codes_put[j-1]
strikes.append(int(round(strike,0))) # append the calculated strike price to the list of strikes
# calculate lower and upper bounds for strikes considering the value of the strike
if strike > 999.999:
step = 5 # we loop over strikes with a step 5 for larger strikes
strike_ub = int(round((strike + strike * diff / 100),-1))
strike_lb = int(round((strike - strike * diff / 100),-1))
else:
step = 1 # we loop over strikes with a step 1 for smaller strikes
strike_ub = int(strike + strike * diff / 100)
strike_lb = int(strike - strike * diff / 100)
# construct RICs for each strike from the lower to upper bound ranges of strikes
for n in range(strike_lb, strike_ub + step, step):
k = None # for strikes < 1000, alongside the 1-point increments in strikes we also try 0.5-point increments,
# which lets us consider strikes with decimal points. This is important to get closer OTMs for smaller-valued assets.
# here we construct option RICs by adding together all the RIC components
# Please note some of the components are different depending on the strike value
plc_holdr1 = asset_name + exp_month + str(day) + str(year)[-2:]
plc_holdr2 = exp[j - 1] + str(year)[-2:]
if n < 10:
z = plc_holdr1 + '00' + str(n) + '00.U^' + plc_holdr2# for integer steps
k = plc_holdr1 + '00' + str(n) + '50.U^' + plc_holdr2# for decimal steps
elif n >= 10 and n < 100:
z = plc_holdr1 + '0' + str(n) + '00.U^' + plc_holdr2
k = plc_holdr1 + '0' + str(n) + '50.U^' + plc_holdr2
elif n >= 100 and n < 1000:
z = plc_holdr1 + str(n) + '00.U^' + plc_holdr2
k = plc_holdr1 + str(n) + '50.U^' + plc_holdr2
elif n >= 1000 and n < 10000:
z = plc_holdr1 + str(n) + '0.U^' + plc_holdr2
elif n >= 10000 and n < 20000:
z = plc_holdr1 + 'A' + str(n)[-4:] + '.U^' + plc_holdr2
elif n >= 20000 and n < 30000:
z = plc_holdr1 + 'B' + str(n)[-4:] + '.U^' + plc_holdr2
elif n >= 30000 and n < 40000:
z = plc_holdr1 + 'C' + str(n)[-4:] + '.U^' + plc_holdr2
elif n >= 40000 and n < 50000:
z = plc_holdr1 + 'D' + str(n)[-4:] + '.U^' + plc_holdr2
# append RICs with integer strikes to the dictionary
if j in potential_RICs:
potential_RICs[j].append(z)
# append RICs with decimal point strikes to the dictionary
if k is not None:
potential_RICs[j].append(k)
else:
potential_RICs[j] = [z]
if k is not None:
potential_RICs[j].append(k)
# report function results and close the log file
now = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
report.write(f'{now}: Potential RICs for {opt_type} options with {OTM_size}% OTM for {year} are constructed\n')
report.close()
return potential_RICs, strikes
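The if/elif ladder above encodes the 5-character strike field of OPRA-style RICs. As a sketch, those same rules can be factored into a small helper (the function name `opra_strike_field` is mine, and this covers only the integer-strike branches above):

```python
def opra_strike_field(strike: int) -> str:
    """Sketch of the 5-character strike field built by the if/elif ladder
    in get_potential_rics: two implied decimal places below 1000, one
    implied decimal place from 1000 to 9999, and a letter (A=1x, B=2x, ...)
    replacing the ten-thousands digit from 10000 up."""
    if strike < 1000:
        return f'{strike:03d}00'          # e.g. 5 -> '00500' (i.e. 5.00)
    if strike < 10000:
        return f'{strike}0'               # e.g. 4000 -> '40000' (i.e. 4000.0)
    if strike < 50000:
        letter = 'ABCD'[strike // 10000 - 1]
        return letter + str(strike)[-4:]  # e.g. 12345 -> 'A2345'
    raise ValueError('strike outside the range covered by the original ladder')
```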
We are now going to make use of PEP 3107 and PEP 484 type hints (and some decorators).
import nb_mypy # !pip3 install nb_mypy --trusted-host pypi.org # https://pypi.org/project/nb-mypy/ # https://gitlab.tue.nl/jupyter-projects/nb_mypy/-/blob/master/Nb_Mypy.ipynb
%load_ext nb_mypy
Version 1.0.4
%reload_ext nb_mypy
Version 1.0.4
%nb_mypy On
%nb_mypy DebugOff
# %nb_mypy unknown
from __future__ import annotations # This future import must come first in the cell; it allows us to use not-yet-fully-defined classes as a Type Hint inside those classes.
from datetime import date as dtdate
from typing import Tuple, Union, Dict, List, Any
import refinitiv.data as rd # This is LSEG's Data and Analytics' API wrapper, called the Refinitiv Data Library for Python.
from refinitiv.data.content import historical_pricing # We will use this Python Class in `rd` to show the Implied Volatility data already available before our work.
from refinitiv.data.content import search # We will use this Python Class in `rd` to find the instrument we are after, closest to At The Money.
from refinitiv.data.content.ipa.financial_contracts import option # We're going to need this to use the content layer of the RD library and the calculators of Greeks and Implied Volatility in IPA & ETI
import refinitiv.data.content.ipa.financial_contracts as rdf # We're going to need this to use the content layer of the RD library and the calculators of Greeks and Implied Volatility in Instrument Pricing Analytics (IPA) and Exchange Traded Instruments (ETI)
import numpy as np # We need `numpy` for mathematical and array manipulations.
import pandas as pd # We need `pandas` for dataframe and array manipulations.
import calendar # We use `calendar` to identify holidays and maturity dates of instruments of interest.
import pytz # We use `pytz` to manipulate time values, aiding the `calendar` library. To import its type stubs, you might need to run `!python3 -m pip install types-pytz`
import pandas_market_calendars as mcal # Used to identify holidays. See `https://github.com/rsheftel/pandas_market_calendars/blob/master/examples/usage.ipynb` for info on this market calendar library
from datetime import datetime, timedelta, timezone # We use these to manipulate time values
from dateutil.relativedelta import relativedelta # We use `relativedelta` to manipulate time values aiding `calendar` library.
# `plotly` is a library used to render interactive graphs:
import plotly
import plotly.graph_objects as go
import plotly.express as px # This is just to see the implied vol graph when that field is available
import matplotlib.pyplot as plt # We use `matplotlib` in case users do not have an environment suited to `plotly`.
from IPython.display import display, clear_output # We use `clear_output` for users who wish to loop graph production on a regular basis.
# Let's authenticate ourselves to LSEG's Data and Analytics service, Refinitiv:
try: # The following sessions are not available in CodeBook, hence this try/except block
rd.open_session(config_name="C:\\Example.DataLibrary.Python-main\\Example.DataLibrary.Python-main\\Configuration\\refinitiv-data.config.json")
rd.open_session("desktop.workspace")
except:
rd.open_session()
print(f"Here we are using the refinitiv Data Library version {rd.__version__}")
<cell>10: error: Library stubs not installed for "pytz" [import] <cell>10: note: Hint: "python3 -m pip install types-pytz" <cell>13: error: Library stubs not installed for "dateutil.relativedelta" [import] <cell>13: note: Hint: "python3 -m pip install types-python-dateutil" <cell>13: note: (or run "mypy --install-types" to install all missing stub packages) <cell>13: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports
Here we are using the refinitiv Data Library version 1.0.0b20
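The class below leans on the third-Friday-of-the-month rule for monthly index option expiries. Before diving in, here is a minimal sketch of that rule using only the standard library (the holiday adjustment, which `Get_exp_dates` handles via `pandas_market_calendars`, is deliberately omitted, and `third_friday` is a name of my choosing):

```python
import calendar
from datetime import date

def third_friday(year: int, month: int) -> date:
    # With weeks starting on Saturday, every week row ends on a Friday, so
    # the third row's last element is always the third Friday of the month.
    c = calendar.Calendar(firstweekday=calendar.SATURDAY)
    return c.monthdatescalendar(year, month)[2][-1]

print(third_friday(2021, 1))  # 2021-01-15
print(third_friday(2022, 1))  # 2022-01-21, the SPX maturity used earlier
```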
class IndexImpVolatNGreeksIPACalc(): # All about Type Hints here: https://realpython.com/python-type-checking/#static-type-checking
def __init__( # Constructor
self,
indexUnderlying: str=".STOXX50E"
):
self.indexUnderlying: str=indexUnderlying
# self.expiryYearOfInterest: int = datetime.now().year
# self.riskFreeRate: Union[str, None] = None
# self.riskFreeRateField: Union[str, None] = None
# self.graphStyle: str = 'without out of trading hours' # 'overlay', '3 graphs', 'simple'
# self.graphTemplate: str = 'plotly_dark'
# self.debug: bool = False
# self.returnDfGraph: bool = False
# # def change_attrs(self, **kwargs): for kwarg in kwargs: self.__setattr__(kwarg, kwargs[kwarg])
def Get_exp_dates(
self,
year: int=datetime.now().year,
days: bool=True,
mcal_get_calendar: str='EUREX'
) -> Dict[int, List[Union[int, dtdate]]]:
'''
Get_exp_dates Version 4.0:
This function gets expiration dates for a year for monthly index options, which are the 3rd Fridays of each month.
Changes
----------------------------------------------
Changed from Version 1.0 to 2.0: Jonathan Legrand changed Haykaz Aramyan's original code to allow
(i) for the function's holiday argument to be changed, and defaulted to 'EUREX' as opposed to 'CBOE_Index_Options' and
(ii) for the function to output full date objects as opposed to just days of the month if argument days=False.
Changed from Version 2.0 to 3.0: Jonathan Legrand changed this function to reflect the fact that it can be used for indices other than EUREX.
Changed from Version 3.0 to 4.0: Jonathan Legrand changed this function to be in line with PEP 3107 (type hints).
Dependencies
----------------------------------------------
Python library 'pandas_market_calendars' version 3.2
Parameters
-----------------------------------------------
Input:
year(int): year for which expiration days are requested
mcal_get_calendar(str): String of the calendar for which holidays have to be taken into account. More on this calendar (link to Github checked 2022-10-11): https://github.com/rsheftel/pandas_market_calendars/blob/177e7922c7df5ad249b0d066b5c9e730a3ee8596/pandas_market_calendars/exchange_calendar_cboe.py
Default: mcal_get_calendar='EUREX'
days(bool): If True, only the days of the month are output, else full datetime.date objects are.
Default: days=True
Output
-----------------------------------------------
dates(dict): dictionary of expiration days (or datetime.date objects, depending on `days`) for each month of a specified year.
'''
i: int # this is for the 'for loop' in this function coming below
# get market holidays for the chosen calendar (EUREX by default)
Cal = mcal.get_calendar(mcal_get_calendar)
holidays: Tuple[np.datetime64, ...]=Cal.holidays().holidays
# set calendar starting from Saturday
c: calendar.Calendar=calendar.Calendar(firstweekday=calendar.SATURDAY)
# get the 3rd Friday of each month
exp_dates: dict={} # https://stackoverflow.com/questions/48054521/indicating-multiple-value-in-a-dict-for-type-hints
date: dtdate
for i in range(1, 13):
date = c.monthdatescalendar(year, i)[2][-1]
# check if the found date is a holiday and take the previous date if it is
if date in holidays:
date = date + timedelta(-1)
# append the date to the dictionary
if year in exp_dates and days:
exp_dates[year].append(date.day)
elif year in exp_dates:
exp_dates[year].append(date)
elif days:
exp_dates[year] = [date.day]
else:
exp_dates[year] = [date]
return exp_dates
def Search_index_opt_ATM(
self,
debug: bool = False,
after: int = 15,
callOrPut: str = 'Put',
searchFields: List[str] = ["ExchangeCode", "UnderlyingQuoteName"],
includeWeeklyOpts: bool = False,
topNuSearchResults: int = 10_000,
timeOfCalcDatetime: datetime = datetime.now(), # Here we allow for historical analysis.
underMrktPriceField: str = "TRDPRC_1"
) -> IndexImpVolatNGreeksIPACalc:
self.after = after
self.timeOfCalcDatetime = timeOfCalcDatetime
self.underMrktPriceField = underMrktPriceField
i: int; j: dtdate; k: str # this is for the 'for loop' in this function coming below
self.exchangeC: str; self.exchangeRIC: str; self.mcalGetCalendar: str
if self.indexUnderlying == ".STOXX50E":
self.exchangeC, self.exchangeRIC, self.mcalGetCalendar = 'EUX', 'STX', 'EUREX'
elif self.indexUnderlying == '.SPX':
self.exchangeC, self.exchangeRIC, self.mcalGetCalendar = 'OPQ', 'SPX', 'CBOE_Futures' # 'CBOE_Index_Options' would arguably be the more natural calendar here
timeOfCalcStr: str=timeOfCalcDatetime.strftime('%Y-%m-%d')
fullDatesAtTimeOfCalc: dict = self.Get_exp_dates(
year=timeOfCalcDatetime.year,
days=False,
mcal_get_calendar=self.mcalGetCalendar)
fullDatesAtTimeOfCalcDatetime: List[datetime] = [
datetime(j.year, j.month, j.day)
for j in fullDatesAtTimeOfCalc[
list(fullDatesAtTimeOfCalc.keys())[0]]]
expiryDateOfInt: datetime = [
j for j in fullDatesAtTimeOfCalcDatetime
if j > timeOfCalcDatetime + relativedelta(days=self.after)][0]
if debug: print(f"expiryDateOfInt: {expiryDateOfInt}")
# Certain search fields are necessary for the next steps, so let's add them to the `searchFields` object:
for k in ['DocumentTitle', 'RIC', 'StrikePrice', 'UnderlyingQuoteRIC'][::-1]: # the `[::-1]` reverses the list
searchFields.insert(0, k)
# Now let's build our Search filter:
_filter: str = f"RCSAssetCategoryLeaf eq 'Option' \
and RIC eq '{self.exchangeRIC}*' \
and CallPutOption eq '{callOrPut}' \
and ExchangeCode eq '{self.exchangeC}' \
and ExpiryDate ge {(expiryDateOfInt - relativedelta(days=1)).strftime('%Y-%m-%d')} \
and ExpiryDate lt {(expiryDateOfInt + relativedelta(days=1)).strftime('%Y-%m-%d')}"
if not includeWeeklyOpts:
_filter += " and DocumentTitle ne '*Weekly*'"
response1 = search.Definition(
view=search.Views.SEARCH_ALL, # To see what views are available: `help(search.Views)` & `search.metadata.Definition(view = search.Views.SEARCH_ALL).get_data().data.df.to_excel("SEARCH_ALL.xlsx")`
query=self.indexUnderlying,
select=', '.join(map(str, searchFields)),
filter=_filter, # ge (greater than or equal to), gt (greater than), lt (less than) and le (less than or equal to). These can only be applied to numeric and date properties.
top=topNuSearchResults,
).get_data()
self.searchDf: pd.DataFrame = response1.data.df
searchDf: pd.DataFrame = self.searchDf
if debug:
print("searchDf")
display(searchDf)
try:
self.underlyingPrice: str = rd.get_history(
universe=[self.indexUnderlying],
fields=[underMrktPriceField],
interval="tick").iloc[-1][0]
except:
print("Function failed at the search stage, returning the following dataframe: ")
display(searchDf)
if debug:
print(f"Underlying {self.indexUnderlying}'s price recorded here was {self.underlyingPrice}")
display(searchDf.iloc[(searchDf.StrikePrice-self.underlyingPrice).abs().argsort()[:10]])
self.instrument: str = searchDf.iloc[(
searchDf.StrikePrice-self.underlyingPrice).abs().argsort()[:1]].RIC.values[0]
self.instrumentInfo: pd.DataFrame = searchDf.iloc[(
searchDf.StrikePrice-self.underlyingPrice).abs().argsort()[:1]]
self.ATMOpt = self.instrument
return self
def IPA_calc(
self,
dateBack: int = 3,
optnMrktPriceField: str = "TRDPRC_1",
debug: bool = False,
atOptionTradeOnly: bool = True,
riskFreeRate: Union[str, None] = None,
riskFreeRateField: Union[str, None] = None,
timeZoneInGraph: datetime = datetime.now().astimezone(),
requestFields: List[str] = [
"DeltaPercent", "GammaPercent", "RhoPercent",
"ThetaPercent", "VegaPercent"],
searchBatchMax: int = 100
) -> IndexImpVolatNGreeksIPACalc:
i: int # Type Hinted for loops coming up below.
k: str # Type Hinted for loops coming up below.
n: int # Type Hinted for loops coming up below.
m: int # Type Hinted for loops coming up below.
p: rdf._base_definition.BaseDefinition # Type Hinted for loops coming up below.
self.dateBack: int = dateBack
self.start: dtdate = self.timeOfCalcDatetime - pd.tseries.offsets.BDay(
self.dateBack)
self.startStr: str = (self.timeOfCalcDatetime - pd.tseries.offsets.BDay(
self.dateBack)).strftime('%Y-%m-%dT%H:%M:%S.%f') # e.g.: '2022-10-05T07:30:00.000'
self.endStr: str = self.timeOfCalcDatetime.strftime('%Y-%m-%dT%H:%M:%S.%f')
_optnMrktPrice: pd.DataFrame = rd.get_history(
universe=[self.instrument],
fields=[optnMrktPriceField],
interval="10min",
start=self.startStr, # Ought to always start at 4 am for OPRA exchanged Options, more info in the article below
end=self.endStr) # Ought to always end at 8 pm for OPRA exchanged Options, more info in the article below
if _optnMrktPrice.empty:
print(f"No data could be found for {self.instrument}, please check it on Refinitiv Workspace")
if debug:
print(self.instrument)
display(_optnMrktPrice)
# get a datapoint every 10 min
optnMrktPrice: pd.DataFrame = _optnMrktPrice.resample(
'10Min').mean()
# Only keep trading days
self.optnMrktPrice: pd.DataFrame = optnMrktPrice[
optnMrktPrice.index.strftime('%Y-%m-%d').isin(
[k for k in _optnMrktPrice.index.strftime('%Y-%m-%d').unique()])]
# Forward Fill to populate NaN values
self.optnMrktPrice.fillna(method='ffill', inplace=True)
# Note also that one may want to only look at 'At Option Trade' datapoints,
# i.e.: Implied Volatility when a trade is made for the Option, but not when
# none is made. For this, we will use the 'At Trade' (`AT`) dataframes:
if atOptionTradeOnly:
self.AToptnMrktPrice: pd.DataFrame = _optnMrktPrice
self.underlying: str = self.searchDf.iloc[
(self.searchDf.StrikePrice).abs().argsort()[
:1]].UnderlyingQuoteRIC.values[0][0]
_underlyingMrktPrice: pd.DataFrame = rd.get_history(
universe=[self.underlying],
fields=[self.underMrktPriceField],
interval="10min",
start=self.startStr,
end=self.endStr)
# Let's put it all in one dataframe, `df`. Some datasets will have data
# going from the time we set for `startStr` all the way to `endStr`. Some won't,
# because no trade happened in the past few minutes/hours. We ought to base
# ourselves on the dataset with values getting closest to `end` and `ffill`
# the other column. As a result, the following `if` block is needed:
if optnMrktPrice.index[-1] >= _underlyingMrktPrice.index[-1]:
df: pd.DataFrame = self.optnMrktPrice.copy()
df[f"underlying {self.underlying} {self.underMrktPriceField}"] = _underlyingMrktPrice
else:
df = _underlyingMrktPrice.copy()
df.rename(
columns={self.underMrktPriceField:
f"underlying {self.underlying} {self.underMrktPriceField}"},
inplace=True)
df[self.underMrktPriceField] = self.optnMrktPrice
df.columns.name = self.optnMrktPrice.columns.name
df.fillna(method='ffill', inplace=True) # Forward Fill to populate NaN values
self.df: pd.DataFrame = df.dropna()  # re-assigned below once the risk-free rate column is added
if atOptionTradeOnly:
ATunderlyingMrktPrice: pd.DataFrame = self.AToptnMrktPrice.join(
_underlyingMrktPrice,
rsuffix=f"_{self.underlying}_underlying",
lsuffix=f"_{self.instrument}_OptPr",
how='inner')
self.strikePrice: pd.DataFrame = self.searchDf.iloc[
(self.searchDf['StrikePrice']-self.underlyingPrice).abs().argsort()[
:1]].StrikePrice.values[0]
# I didn't think I needed to type-hint for the case when `_riskFreeRate` &
# `_riskFreeRateField` were `None`, but error messages suggest otherwise...
_riskFreeRate: Union[str, None]
_riskFreeRateField: Union[str, None]
if riskFreeRate is None and self.indexUnderlying == ".SPX":
_riskFreeRate, _riskFreeRateField = 'USDCFCFCTSA3M=', 'TR.FIXINGVALUE'
elif riskFreeRate is None and self.indexUnderlying == ".STOXX50E":
_riskFreeRate, _riskFreeRateField = 'EURIBOR3MD=', 'TR.FIXINGVALUE'
elif riskFreeRate is not None:
_riskFreeRate, _riskFreeRateField = riskFreeRate, riskFreeRateField
self.riskFreeRate: Union[str, None] = riskFreeRate
self.riskFreeRateField: Union[str, None] = riskFreeRateField
_RfRate: pd.DataFrame = rd.get_history(
universe=[_riskFreeRate], # USD3MFSR=, USDSOFR=
fields=[_riskFreeRateField],
# Since we will use `dropna()` later on in the code to select the rows we are after,
# we ask for more risk-free data than needed, just in case we don't have enough:
start=(self.start - timedelta(days=1)).strftime('%Y-%m-%d'), # https://teamtreehouse.com/community/local-variable-datetime-referenced-before-assignment
end=(self.timeOfCalcDatetime +
timedelta(days=1)).strftime('%Y-%m-%d'))
self.RfRate: pd.DataFrame = _RfRate.resample(
'10Min').mean().fillna(method='ffill')
df['RfRate'] = self.RfRate
self.df: pd.DataFrame = df.fillna(method='ffill')
if atOptionTradeOnly:
pd.options.mode.chained_assignment = None # default='warn'
ATunderlyingMrktPrice['RfRate'] = [
pd.NA for i in ATunderlyingMrktPrice.index]
for i in self.RfRate.index:
_i: str = str(i)[:10]
for n, m in enumerate(ATunderlyingMrktPrice.index):
if _i in str(m):
if len(self.RfRate.loc[i].values) == 2:
ATunderlyingMrktPrice[
'RfRate'].iloc[n] = self.RfRate.loc[i].values[0][0]
elif len(self.RfRate.loc[i].values) == 1:
ATunderlyingMrktPrice[
'RfRate'].iloc[n] = self.RfRate.loc[i].values[0]
self.ATdf: pd.DataFrame = ATunderlyingMrktPrice.copy().fillna(method='ffill') # This is in case there were no Risk Free datapoints released after a certain time, but trades on the option still went through.
if timeZoneInGraph != 'GMT':
if atOptionTradeOnly:
self.ATdf.index = [
self.ATdf.index[i].replace(
tzinfo=pytz.timezone(
'GMT')).astimezone(
tz=datetime.now().astimezone().tzinfo)
for i in range(len(self.ATdf))]
else:
df.index = [
df.index[i].replace(
tzinfo=pytz.timezone(
'GMT')).astimezone(
tz=timeZoneInGraph.tzinfo)
for i in range(len(df))]
# Define our message to the calculation endpoint in the RDP (Refinitiv Data Platform) API, `atOptionTradeOnly`:
self.universeL: List[rdf._base_definition.BaseDefinition]
if atOptionTradeOnly:
self.universeL = [
option.Definition(
underlying_type=option.UnderlyingType.ETI,
buy_sell='Buy',
instrument_code=self.instrument,
strike=float(self.strikePrice),
pricing_parameters=option.PricingParameters(
market_value_in_deal_ccy=float(
self.ATdf[
f"{optnMrktPriceField}_{self.instrument}_OptPr"][i]),
risk_free_rate_percent=float(self.ATdf['RfRate'][i]),
underlying_price=float(
self.ATdf[
f"{self.underMrktPriceField}_{self.underlying}_underlying"][i]),
pricing_model_type='BlackScholes',
volatility_type='Implied',
underlying_time_stamp='Default',
report_ccy='EUR'))
for i in range(len(self.ATdf.index))]
else:
self.universeL = [
option.Definition(
underlying_type=option.UnderlyingType.ETI,
buy_sell='Buy',
instrument_code=self.instrument,
strike=float(self.strikePrice),
pricing_parameters=option.PricingParameters(
market_value_in_deal_ccy=float(df[optnMrktPriceField][i]),
risk_free_rate_percent=float(df.RfRate[i]),
underlying_price=float(
df[f"underlying {self.underlying} {self.underMrktPriceField}"][i]),
pricing_model_type='BlackScholes',
volatility_type='Implied',
underlying_time_stamp='Default',
report_ccy='EUR'))
for i in range(len(df.index))]
# We would like to keep a minimum of these fields in the Search Response in order to construct the following graphs:
for k in ["MarketValueInDealCcy", "RiskFreeRatePercent",
"UnderlyingPrice", "PricingModelType", "DividendType",
"VolatilityType", "UnderlyingTimeStamp", "ReportCcy",
"Volatility"][::-1]:
requestFields.insert(0, k)
self.requestFields: List[str] = requestFields
for i, p in enumerate(
[self.universeL[i:i+searchBatchMax]
for i in range(0, len(self.universeL), searchBatchMax)]): # This list chunks our `universeL` in batches of `searchBatchMax`
_IPADf: pd.DataFrame = rdf.Definitions(
universe=p, fields=requestFields).get_data().data.df
if i == 0:
self.IPADf: pd.DataFrame = _IPADf
else:
# `DataFrame.append` is deprecated (and removed in pandas 2.0); `pd.concat` is the idiomatic replacement:
self.IPADf = pd.concat(
[self.IPADf, _IPADf], ignore_index=True)
if atOptionTradeOnly:
self.IPADf.index = self.ATdf.index
else:
self.IPADf.index = self.df.index
self.atOptionTradeOnly: bool = atOptionTradeOnly
return self
def Simple_graph(
self,
maxColwidth: int = 200,
size: Tuple[int, int] = (15, 5),
lineStyle: str = '.-', # 'o-'
plotting: str = 'Volatility',
displayIndexInfo: bool = False
) -> IndexImpVolatNGreeksIPACalc:
# We are going to want to show details about the data retrieved in a dataframe in the output of this function. The one line below allows us to maximise the width (column) length of cells to see all that is written within them.
if displayIndexInfo:
pd.options.display.max_colwidth = maxColwidth
display(self.instrumentInfo)
IPADfSimpleGraph: pd.DataFrame = pd.DataFrame(
data=self.IPADf[[plotting]].values,
index=self.IPADf[[plotting]].index)
fig, axes = plt.subplots(ncols=1, figsize=size)
axes.plot(IPADfSimpleGraph, lineStyle)
if self.atOptionTradeOnly:
axes.set_title(f"{self.instrument} {plotting} At Trade Only")
else:
axes.set_title(f"{self.instrument} {plotting}")
self.plt = plt
return self
def Graph(
self,
include: Union[None, List[str]] = None,
graphTemplate: str = 'plotly_dark',
debug: bool=False
) -> IndexImpVolatNGreeksIPACalc:
if include is None:
include = self.requestFields
try: # This is needed in case there is not enough data to calculate values for all timestamps, see https://stackoverflow.com/questions/67244912/wide-format-csv-with-plotly-express
self.IPADfGraph = self.IPADf[include]
if debug: display(self.IPADfGraph)
self.fig = px.line(self.IPADfGraph)
except Exception:
try:
print("Not all fields could be graphed")
self.IPADfGraph = self.IPADfGraph[
["ImpliedVolatility", "MarketValueInDealCcy",
"RiskFreeRatePercent", "UnderlyingPrice"]]
self.fig = px.line(self.IPADfGraph)
except Exception:
print("Not all fields could be graphed")
self.IPADfGraph = self.IPADfGraph[
["Volatility", "MarketValueInDealCcy",
"RiskFreeRatePercent", "UnderlyingPrice"]]
self.fig = px.line(self.IPADfGraph)
self.graphTemplate = graphTemplate
return self
def Overlay(
self
) -> IndexImpVolatNGreeksIPACalc:
self.fig.update_layout(
title=self.instrument,
template=self.graphTemplate)
self.fig.for_each_trace(
lambda t: t.update(
visible=True if t.name in self.IPADfGraph.columns[:1] else "legendonly"))
return self
def Stack3(
self,
autosize: bool = False,
width: int = 1300,
height: int = 500
) -> IndexImpVolatNGreeksIPACalc:
self.fig = plotly.subplots.make_subplots(rows=3, cols=1)
self.fig.add_trace(go.Scatter(
x=self.IPADf.index, y=self.IPADfGraph.ImpliedVolatility,
name='Op Imp Volatility'), row=1, col=1)
self.fig.add_trace(go.Scatter(
x=self.IPADf.index, y=self.IPADfGraph.MarketValueInDealCcy,
name='Op Mk Pr'), row=2, col=1)
self.fig.add_trace(go.Scatter(
x=self.IPADf.index, y=self.IPADfGraph.UnderlyingPrice,
name=self.underlying+' Undrlyg Pr'), row=3, col=1)
self.fig.update(layout_xaxis_rangeslider_visible=False)
self.fig.update_layout(title=self.IPADfGraph.columns.name)
self.fig.update_layout(
title=self.instrument,
template=self.graphTemplate,
autosize=autosize,
width=width,
height=height)
return self
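The batching at the heart of `IPA_calc` — splitting `universeL` into requests of at most `searchBatchMax` option definitions, since IPA computes up to 100 instruments per call — can be sketched in isolation. The function name `chunk` is illustrative, not part of any library:

```python
from typing import List, TypeVar

T = TypeVar("T")

def chunk(seq: List[T], size: int) -> List[List[T]]:
    # Consecutive slices of at most `size` items, mirroring the
    # `universeL[i:i+searchBatchMax]` list comprehension used above.
    return [seq[i:i + size] for i in range(0, len(seq), size)]

chunk(list(range(7)), 3)  # → [[0, 1, 2], [3, 4, 5], [6]]
```

Each sub-list is then sent as one `rdf.Definitions(universe=p, ...)` request, and the partial result frames are stitched back together.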
from functools import wraps
from time import time
# IndexImpVolatNGreeksIPACalc().Search_index_opt_ATM().IPA_calc().Graph(debug=True).Overlay().fig.show()
s = time()
t1 = IndexImpVolatNGreeksIPACalc()
print(time() - s)
0.0
s = time()
t2 = t1.Search_index_opt_ATM()
print(time() - s)
5.280257225036621
s = time()
t3 = t2.IPA_calc()
print(time() - s)
9.758203268051147
s = time()
t3 = t2.Graph()
print(time() - s)
Not all fields could be graphed Not all fields could be graphed 0.07208371162414551
s = time()
t4 = t3.Overlay().fig.show()
print(time() - s)
0.021039724349975586
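The repeated `s = time(); ...; print(time() - s)` pattern above (and the `wraps` import) can be folded into a small timing decorator. This is a hedged sketch, not part of the article's class:

```python
from functools import wraps
from time import time

def timed(fn):
    """Print how long each call to `fn` takes, preserving its metadata via `wraps`."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        s = time()
        result = fn(*args, **kwargs)
        print(f"{fn.__name__}: {time() - s:.3f}s")
        return result
    return wrapper

@timed
def square(x):
    # Stand-in for e.g. `Search_index_opt_ATM` or `IPA_calc`.
    return x * x
```
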
IndexImpVolatNGreeksIPACalc().Search_index_opt_ATM().IPA_calc().Simple_graph().plt.show()
IndexImpVolatNGreeksIPACalc().Search_index_opt_ATM().IPA_calc().Graph().Stack3().fig.show()
Not all fields could be graphed Not all fields could be graphed
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[111], in IndexImpVolatNGreeksIPACalc.Graph(self, include, graphTemplate, debug)
--> self.fig = px.line(self.IPADfGraph)
...
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)
Cell In[111], in IndexImpVolatNGreeksIPACalc.Graph(self, include, graphTemplate, debug)
--> self.IPADfGraph = self.IPADfGraph[
        ["ImpliedVolatility", "MarketValueInDealCcy",
         "RiskFreeRatePercent", "UnderlyingPrice"]]
...
KeyError: "['ImpliedVolatility'] not in index"

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
Cell In[114], line 1
----> 1 IndexImpVolatNGreeksIPACalc().Search_index_opt_ATM().IPA_calc().Graph().Stack3().fig.show()
...
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
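The root cause of the `ValueError` above is pandas refusing to coerce a whole DataFrame or Series to a single boolean. A minimal reproduction of the unambiguous checks pandas suggests, independent of IPA:

```python
import pandas as pd

df = pd.DataFrame({"a": [1.0, 2.0]})
# `bool(df)` or `if df:` would raise:
#   ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, ...
has_rows = not df.empty               # True: the frame has rows
any_positive = (df["a"] > 0).any()    # True: at least one value is positive
all_positive = (df["a"] > 0).all()    # True: every value is positive
```
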
test1 = IndexImpVolatNGreeksIPACalc(indexUnderlying=".SPX")
test2 = test1.Search_index_opt_ATM(
debug=False,
after=15,
callOrPut='Put',
searchFields=["ExchangeCode", "UnderlyingQuoteName"],
includeWeeklyOpts=False,
topNuSearchResults=10,
timeOfCalcDatetime=datetime.now(),
underMrktPriceField="TRDPRC_1")
test3 = test2.IPA_calc(
dateBack=3,
optnMrktPriceField="TRDPRC_1",
debug=False,
atOptionTradeOnly=True,
riskFreeRate=None,
riskFreeRateField=None,
timeZoneInGraph=datetime.now().astimezone())
test3.IPADf
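The 10-minute resampling and forward-filling that `IPA_calc` applies to the market prices can be illustrated on a toy frame; the `TRDPRC_1` column name mirrors the field used above, the timestamps are made up:

```python
import pandas as pd

idx = pd.date_range("2023-01-02 09:00", periods=4, freq="7min")
prices = pd.DataFrame({"TRDPRC_1": [100.0, 101.0, None, 102.0]}, index=idx)
# Average trades into 10-minute bars, then forward-fill empty bars,
# just as `IPA_calc` does with `resample('10Min').mean()` and `ffill`:
bars = prices.resample("10Min").mean().ffill()
# bars: 100.5 (09:00), 100.5 (09:10, forward-filled), 102.0 (09:20)
```
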
test4 = test3.Simple_graph(
maxColwidth=200,
size=(15, 5),
lineStyle='.-', # 'o-'
plotting='Volatility'
)
test4.plt.show()
# rd.close_session() # It's good practice to close our RD session when done.
# # !pip3 install --trusted-host pypi.org dash_tvlwc
# # !pip3 install --trusted-host pypi.org dash
# import random
# import dash_tvlwc
# import dash
# from dash.dependencies import Input, Output, State
# from dash import html
# from datetime import datetime
# import random
# import pandas as pd
# def generate_random_series(*args, **kwargs):
# return generate_random_ohlc(*args, **kwargs, close_only=True)
# def generate_random_ohlc(v0: float, ret=0.05, n=500, t0='2021-01-01', close_only=False):
# datelist = [dt.strftime('%Y-%m-%d') for dt in pd.date_range(t0, periods=n).tolist()]
# res = []
# c = v0
# for dt in datelist:
# o = c
# c = o * (1 + random.uniform(-ret, ret))
# if not close_only:
# h = max(o, c) * (1 + random.uniform(0, ret))
# l = min(o, c) * (1 + random.uniform(-ret, 0))
# res.append({
# "time": dt,
# "open": o,
# "high": h,
# "low": l,
# "close": c
# })
# else:
# res.append({
# "time": dt,
# "value": c
# })
# o = c
# return res
# app = dash.Dash(__name__, external_stylesheets=['./assets/stylesheet.css'])
# chart_options = {
# 'layout': {
# 'background': {'type': 'solid', 'color': '#1B2631'},
# 'textColor': 'white',
# },
# 'grid': {
# 'vertLines': {'visible': False},
# 'horzLines': {'visible': False},
# },
# 'localization': {'locale': 'en-US'}
# }
# panel1 = [
# html.H2('Bar'),
# dash_tvlwc.Tvlwc(
# id='bar-chart',
# seriesData=[generate_random_ohlc(v0=100, n=50)],
# seriesTypes=['bar'],
# width='100%',
# chartOptions=chart_options
# )
# ]
# p2_series = generate_random_ohlc(v0=1, n=50, ret=0.1)
# p2_series = [{'time': v['time']} if 12 < idx < 20 or idx > 45 else v for idx, v in enumerate(p2_series)]
# panel2 = [
# html.H2('Candlestick'),
# dash_tvlwc.Tvlwc(
# id='candlestick-chart',
# seriesData=[p2_series],
# seriesTypes=['candlestick'],
# seriesOptions=[{
# 'downColor': '#a6269a',
# 'upColor': '#ffaa30',
# 'borderColor': 'black',
# 'wickColor': 'black'
# }],
# width='100%',
# chartOptions={'layout': {'background': {'type': 'solid', 'color': 'white'}}}
# )
# ]
# panel3 = [
# html.H2('Area'),
# dash_tvlwc.Tvlwc(
# id='area-chart',
# seriesData=[generate_random_series(v0=15, n=50)],
# seriesTypes=['area'],
# seriesOptions=[{
# 'lineColor': '#FFAA30',
# 'topColor': '#2962FF',
# 'bottomColor': 'rgba(180, 98, 200, 0.1)',
# 'priceLineWidth': 3,
# 'priceLineColor': 'red'
# }],
# width='100%',
# chartOptions=chart_options
# )
# ]
# p4_series = generate_random_series(v0=5000, n=50)
# p4_mean = sum([p['value'] for p in p4_series]) / 50
# p4_max = max([p['value'] for p in p4_series])
# price_lines = [{'price': p4_max, 'color': '#2962FF', 'lineStyle': 0, 'title': 'MAX PRICE', 'axisLabelVisible': True}]
# panel4 = [
# html.H2('Baseline'),
# dash_tvlwc.Tvlwc(
# id='baseline-chart',
# seriesData=[p4_series],
# seriesTypes=['baseline'],
# seriesOptions=[{
# 'baseValue': {'type': 'price', 'price': p4_mean},
# 'topFillColor1': 'black',
# 'topFillColor2': 'rgba(255,255,255,0)',
# 'topLineColor': 'black',
# 'crosshairMarkerRadius': 8,
# 'lineWidth': 5,
# 'priceScaleId': 'left'
# }],
# seriesPriceLines=[price_lines],
# width='100%',
# chartOptions={
# 'rightPriceScale': {'visible': False},
# 'leftPriceScale': {'visible': True, 'borderColor': 'rgba(197, 203, 206, 1)',},
# 'timeScale': {'visible': False},
# 'grid': {'vertLines': {'visible': False}, 'horzLines': {'style': 0, 'color': 'black'}},
# }
# )
# ]
# # add markers and add color to volume bar
# p5_series = generate_random_series(v0=1, n=50, ret=0.1)
# markers = [
# {'time': p5_series[15]['time'], 'position': 'aboveBar', 'color': '#f68410', 'shape': 'circle', 'text': 'Signal'},
# {'time': p5_series[20]['time'], 'position': 'belowBar', 'color': 'white', 'shape': 'arrowUp', 'text': 'Buy'}
# ]
# p5_series_volume = generate_random_series(v0=100, n=50, ret=0.05)
# for i in p5_series_volume:
# i['color'] = random.choice(['rgba(0, 150, 136, 0.8)', 'rgba(255,82,82, 0.8)'])
# panel5 = [
# html.H2('Line and volume'),
# dash_tvlwc.Tvlwc(
# id='line-chart',
# seriesData=[p5_series, p5_series_volume],
# seriesTypes=['line', 'histogram'],
# seriesOptions=[
# {
# 'lineWidth': 1
# },
# {
# 'color': '#26a69a',
# 'priceFormat': {'type': 'volume'},
# 'priceScaleId': '',
# 'scaleMargins': {'top': 0.9, 'bottom': 0},
# 'priceLineVisible': False
# },
# ],
# seriesMarkers=[markers],
# width='100%',
# chartOptions=chart_options
# )
# ]
# p6_series = generate_random_series(v0=100, n=50, ret=0.3)
# for idx, _ in enumerate(p6_series):
# if idx in [5,12,13,14,20,33,34,46]:
# p6_series[idx]['color'] = 'white'
# panel6 = [
# html.H2('Histogram'),
# dash_tvlwc.Tvlwc(
# id='histogram-chart',
# seriesData=[p6_series],
# seriesTypes=['histogram'],
# seriesOptions=[{
# 'color': '#ff80cc',
# 'base': 100,
# 'priceLineVisible': False,
# 'lastValueVisible': False
# }],
# width='100%',
# chartOptions={'layout': {'textColor': '#ff80cc', 'background': {'type': 'solid', 'color': 'black'}}}
# )
# ]
# app.layout = html.Div([
# html.H1('Chart options and series options'),
# html.Div(className='container', children=[
# html.Div(className='one', children=panel1),
# html.Div(className='two', children=panel2),
# html.Div(className='three', children=panel3),
# html.Div(className='four', children=panel4),
# html.Div(className='five', children=panel5),
# html.Div(className='six', children=panel6),
# ])
# ])
# if __name__ == '__main__':
# app.run_server(debug=True)
# !where python
# import os, signal
# os.kill(os.getpid(), signal.SIGTERM)
As you can see, not only can we use IPA to gather large amounts of bespoke, calculated values, but we can also portray this insight in a simple, quick and relevant way. The last cell in particular loops through our built function to give an updated graph every 5 seconds using 'legacy' technologies that would work in most environments (e.g.: Eikon Codebook).
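The 5-second refresh loop mentioned here can be sketched with a small helper; `run_every` is a hypothetical name of ours, and in practice `fn` would re-run the `IndexImpVolatNGreeksIPACalc().Search_index_opt_ATM().IPA_calc().Graph().Overlay().fig.show()` chain:

```python
import time
from typing import Any, Callable, List

def run_every(fn: Callable[[], Any], seconds: float, iterations: int) -> List[Any]:
    """Call `fn` once per interval, e.g. to redraw an updated graph every 5 seconds."""
    results = []
    for n in range(iterations):
        results.append(fn())
        if n < iterations - 1:  # no need to sleep after the final call
            time.sleep(seconds)
    return results
```
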
Brilliant: Black-Scholes-Merton
What is the RIC syntax for options in Refinitiv Eikon?
Functions to find Option RICs traded on different exchanges
Making your code faster: Cython and parallel processing in the Jupyter Notebook
RIC nomenclature for expired Options on Futures
Expiration Dates for Expired Options API
Measure runtime of a Jupyter Notebook code cell